Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

When I started college I bought my first laptop, Lenovo's "Xiao Y", a Y460, specced with an i5, 2 GB of RAM, a 500 GB hard drive, and an ATI 5650 GPU. Switchable integrated/discrete graphics was still fairly fashionable back then, and some people considered it one of that generation's "legendary machines". It accompanied me through four years of college plus the first year after graduation, though not without drama. Around the end of my first semester, file operations would sometimes hang; I took it to the service center and, sure enough, the hard drive was failing, and I copied everything onto a newly bought portable drive on the spot. Lenovo's after-sales service was actually quite good: within warranty the drive was replaced for free, whereas out of warranty I probably couldn't have afforded the repair. Later a much bigger problem appeared: the screen started glitching, with horizontal artifacts that crept upward. At first it seemed tied to humidity and temperature; in cold weather it happened easily, in warm weather less often, and closing and reopening the lid would fix it for a while before the artifacts climbed back up. It limped along like that until the warranty ran out. The service center quoted over 2,000 yuan for the repair; a repair shop inside the Yuquan campus ended up doing it for around 600. That saved a lot, though afterwards the panel had some backlight bleed and two or three bright spots. Still, it gave my none-too-wealthy self a working screen again, because strictly speaking the old one had become unusable, with more than half of it, eventually nearly all of it, glitched out. And that's how the laptop got me through college.
Oh right, in sophomore year I added another 2 GB stick of RAM, because courses like Operating Systems required running virtual machines and 2 GB wasn't enough. Mind you, we were all on 32-bit Windows 7 back then, which can't actually use a full 4 GB; you needed a crack patch for the system to recognize it all. After graduation my income was low; I wanted a different computer, ideally a Mac, but couldn't afford one. I still played DNF on this machine at the time, but the fan was very loud, so to optimize things a bit I bought an mSATA SSD, since the laptop happened to have a spare slot. Everything I had found online before that involved pulling the optical drive and fitting a SATA SSD in its bay, which felt far too daunting for me then. The mSATA SSD brought a sizeable boost: with the operating system installed on it, the other drives held nothing but data. Later I even got a Hackintosh running on it, until a roommate force-powered the machine off and it never booted into macOS again; I never figured out why, and redoing the original install procedure didn't work either. After that I built a desktop and the laptop mostly sat idle, coming out only occasionally, notably serving as a temporary machine for my LD (my better half) during the pandemic.
The most recent episode: I wanted to export the videos from my dash cam, and while uploading them I clicked a Windows 10 update. After the reboot the machine wouldn't start at all. At first I assumed it would at worst need a system reinstall, but reinstalling failed too; it simply wouldn't boot, not even with a USB stick plugged in. Early on I could still reach the BIOS, then even that became inaccessible. Cleaning the RAM modules' gold contacts and similar tricks got me nowhere, so I figured some component on the mainboard must have died, and thus began this half-reckless adventure: I bought a replacement mainboard. Tearing this laptop down is genuinely hard. The strip with the power button simply would not come off, I snapped two clips, and several of the ribbon-cable sockets inside broke when unplugging. Luckily the machine worked once the new board was in, but then things turned strange: this computer dual-boots Ubuntu and Windows 10, and Ubuntu starts normally while Windows 10 refuses to. Boot media don't help either: a LaoMaoTao rescue USB starts but can't enter its PE environment, and a Windows 10 install USB fails with error code 0xc00000e9. My current suspicion is that the SSD or the RAM is flaky; that will have to wait. I couldn't even find a teardown video while opening it up, the machine is just too old. I'll keep poking at it.

As a long-time Windows user, and arguably half a fan of the system, I'll stick to my principle of naming both strengths and weaknesses: Windows has one flaw that is genuinely painful, and anyone who has used it long enough will have hit it, namely the USB drive that cannot be ejected. Back in the ancient days there wasn't much you could do about it; shutting down before unplugging the drive was the safe option. Tools like 360 could supposedly release the file locks, but that requires installing 360, which in today's environment costs more than it's worth, being a rather invasive class of software. On Windows these days I only install Huorong, or just rely on the built-in Defender.

Method 1

This latest episode was partly Huorong's fault too. When I tried to eject the USB drive, I was told it was occupied by another big nuisance of a program, XlibabaProtect.exe. That process fully showcases a certain company's technical prowess: I tried N different ways to kill it or delete it and none of them worked. Then I switched my line of thinking: in a case like this, some process must be holding files on the USB drive open. Recent versions of PowerToys add a File Locksmith entry to the file context menu, which shows which files are in use and by which processes. Possibly I just used it wrong, not having read the docs carefully; it has a "Restart as administrator" option that might do the trick.
That counts as the first method.

Method 2

The second method is the "Open Resource Monitor" link on the Performance tab of Windows Task Manager. Say my USB drive's letter is F:; searching for that drive letter turns up the processes holding files under it. Be extremely careful here ‼️‼️: do not kill these processes lightly. Some are system processes whose death can blue-screen the machine, so leave them alone unless you can confirm what a given process does.
Neither of the first two methods worked for me.

Method 3

So I tried a third method: taking the disk offline. Right-click "Computer", choose Manage, open "Disk Management", find the USB drive's entry, right-click and select "Offline", then try ejecting again. This didn't work for me either.

Method 4

This is the only method that worked for me. Search the Start menu for "event" to find the Event Viewer, which shows the most recent Windows events. With it open, click eject on the USB drive: failing to eject is itself recorded as an error event, so after a refresh you can see exactly why the eject failed, down to the specific process responsible.
It turned out to be a process of Intel's driver management software; killing it let the drive eject. So although I called that certain company's process a nuisance above, in this case it was genuinely wrongly accused.

Recently, or really for a long time now, I've wanted to link up a few scattered servers and my home network, for things like remote desktop access. I set up frp for this before, but my home PC wasn't secured carefully and got compromised, so I wanted a relatively safer approach, such as restricting access to specific IPs and ports; with no fixed IP, though, that's hard to arrange. Then I came across the Tailscale + Headscale approach and decided to give it a try, and promptly stepped into several rather baffling pits right at the start.
You can set it up by following the official docs, or any of the tutorials others have written online. My problems were mainly around the configuration file.

The first problem

Error initializing error="failed to read or create private key: failed to save private key to disk: open /etc/headscale/private.key: read-only file system"

Seeing this at first honestly left me confused. What was going on? A read-only file system usually means the filesystem itself has gone wrong and can't be written to, calling for a reboot or a change of mount options, and this wrong error message led me down exactly that path. Only later did I learn the config file was to blame; a similar reply appears under another tutorial, which I ignored at first, but it's the same problem.
The default configuration file looks like this:

---
# headscale will look for a configuration file named `config.yaml` (or `config.json`) in the following order:
#
# - `/etc/headscale`
# - `~/.headscale`
# - current working directory

# The url clients will connect to.
# Typically this will be a domain like:
#
# https://myheadscale.example.com:443
#
server_url: http://127.0.0.1:8080

# Address to listen to / bind to on the server
#
# For production:
# listen_addr: 0.0.0.0:8080
listen_addr: 127.0.0.1:8080

# Address to listen to /metrics, you may want
# to keep this endpoint private to your internal
# network
#
metrics_listen_addr: 127.0.0.1:9090

# Address to listen for gRPC.
# gRPC is used for controlling a headscale server
# remotely with the CLI
# Note: Remote access _only_ works if you have
# valid certificates.
#
# For production:
# grpc_listen_addr: 0.0.0.0:50443
grpc_listen_addr: 127.0.0.1:50443

# Allow the gRPC admin interface to run in INSECURE
# mode. This is not recommended as the traffic will
# be unencrypted. Only enable if you know what you
# are doing.
grpc_allow_insecure: false

# Private key used to encrypt the traffic between headscale
# and Tailscale clients.
# The private key file will be autogenerated if it's missing.
#
# For production:
# /var/lib/headscale/private.key
private_key_path: ./private.key

# The Noise section includes specific configuration for the
# TS2021 Noise protocol
noise:
  # The Noise private key is used to encrypt the
  # traffic between headscale and Tailscale clients when
  # using the new Noise-based protocol. It must be different
  # from the legacy private key.
  #
  # For production:
  # private_key_path: /var/lib/headscale/noise_private.key
  private_key_path: ./noise_private.key

# List of IP prefixes to allocate tailaddresses from.
# Each prefix consists of either an IPv4 or IPv6 address,
# and the associated prefix length, delimited by a slash.
# While this looks like it can take arbitrary values, it
# needs to be within IP ranges supported by the Tailscale
# client.
# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
ip_prefixes:
  - fd7a:115c:a1e0::/48
  - 100.64.0.0/10

# DERP is a relay system that Tailscale uses when a direct
# connection cannot be established.
# https://tailscale.com/blog/how-tailscale-works/#encrypted-tcp-relays-derp
#
# headscale needs a list of DERP servers that can be presented
# to the clients.
derp:
  server:
    # If enabled, runs the embedded DERP server and merges it into the rest of the DERP config
    # The Headscale server_url defined above MUST be using https, DERP requires TLS to be in place
    enabled: false

    # Region ID to use for the embedded DERP server.
    # The local DERP prevails if the region ID collides with other region ID coming from
    # the regular DERP config.
    region_id: 999

    # Region code and name are displayed in the Tailscale UI to identify a DERP region
    region_code: "headscale"
    region_name: "Headscale Embedded DERP"

    # Listens over UDP at the configured address for STUN connections - to help with NAT traversal.
    # When the embedded DERP server is enabled stun_listen_addr MUST be defined.
    #
    # For more details on how this works, check this great article: https://tailscale.com/blog/how-tailscale-works/
    stun_listen_addr: "0.0.0.0:3478"

  # List of externally available DERP maps encoded in JSON
  urls:
    - https://controlplane.tailscale.com/derpmap/default

  # Locally available DERP map files encoded in YAML
  #
  # This option is mostly interesting for people hosting
  # their own DERP servers:
  # https://tailscale.com/kb/1118/custom-derp-servers/
  #
  # paths:
  #   - /etc/headscale/derp-example.yaml
  paths: []

  # If enabled, a worker will be set up to periodically
  # refresh the given sources and update the derpmap
  # will be set up.
  auto_update_enabled: true

  # How often should we check for DERP updates?
  update_frequency: 24h

# Disables the automatic check for headscale updates on startup
disable_check_updates: false

# Time before an inactive ephemeral node is deleted?
ephemeral_node_inactivity_timeout: 30m

# Period to check for node updates within the tailnet. A value too low will severely affect
# CPU consumption of Headscale. A value too high (over 60s) will cause problems
# for the nodes, as they won't get updates or keep alive messages frequently enough.
# In case of doubts, do not touch the default 10s.
node_update_check_interval: 10s

# SQLite config
db_type: sqlite3

# For production:
# db_path: /var/lib/headscale/db.sqlite
db_path: ./db.sqlite

# # Postgres config
# If using a Unix socket to connect to Postgres, set the socket path in the 'host' field and leave 'port' blank.
# db_type: postgres
# db_host: localhost
# db_port: 5432
# db_name: headscale
# db_user: foo
# db_pass: bar

# If other 'sslmode' is required instead of 'require(true)' and 'disabled(false)', set the 'sslmode' you need
# in the 'db_ssl' field. Refers to https://www.postgresql.org/docs/current/libpq-ssl.html Table 34.1.
# db_ssl: false

### TLS configuration
#
## Let's encrypt / ACME
#
# headscale supports automatically requesting and setting up
# TLS for a domain with Let's Encrypt.
#
# URL to ACME directory
acme_url: https://acme-v02.api.letsencrypt.org/directory

# Email to register with ACME provider
acme_email: ""

# Domain name to request a TLS certificate for:
tls_letsencrypt_hostname: ""

# Path to store certificates and metadata needed by
# letsencrypt
# For production:
# tls_letsencrypt_cache_dir: /var/lib/headscale/cache
tls_letsencrypt_cache_dir: ./cache

# Type of ACME challenge to use, currently supported types:
# HTTP-01 or TLS-ALPN-01
# See [docs/tls.md](docs/tls.md) for more information
tls_letsencrypt_challenge_type: HTTP-01
# When HTTP-01 challenge is chosen, letsencrypt must set up a
# verification endpoint, and it will be listening on:
# :http = port 80
tls_letsencrypt_listen: ":http"

## Use already defined certificates:
tls_cert_path: ""
tls_key_path: ""

log:
  # Output formatting for logs: text or json
  format: text
  level: info

# Path to a file containg ACL policies.
# ACLs can be defined as YAML or HUJSON.
# https://tailscale.com/kb/1018/acls/
acl_policy_path: ""

## DNS
#
# headscale supports Tailscale's DNS configuration and MagicDNS.
# Please have a look to their KB to better understand the concepts:
#
# - https://tailscale.com/kb/1054/dns/
# - https://tailscale.com/kb/1081/magicdns/
# - https://tailscale.com/blog/2021-09-private-dns-with-magicdns/
#
dns_config:
  # Whether to prefer using Headscale provided DNS or use local.
  override_local_dns: true

  # List of DNS servers to expose to clients.
  nameservers:
    - 1.1.1.1

  # NextDNS (see https://tailscale.com/kb/1218/nextdns/).
  # "abc123" is example NextDNS ID, replace with yours.
  #
  # With metadata sharing:
  # nameservers:
  #   - https://dns.nextdns.io/abc123
  #
  # Without metadata sharing:
  # nameservers:
  #   - 2a07:a8c0::ab:c123
  #   - 2a07:a8c1::ab:c123

  # Split DNS (see https://tailscale.com/kb/1054/dns/),
  # list of search domains and the DNS to query for each one.
  #
  # restricted_nameservers:
  #   foo.bar.com:
  #     - 1.1.1.1
  #   darp.headscale.net:
  #     - 1.1.1.1
  #     - 8.8.8.8

  # Search domains to inject.
  domains: []

  # Extra DNS records
  # so far only A-records are supported (on the tailscale side)
  # See https://github.com/juanfont/headscale/blob/main/docs/dns-records.md#Limitations
  # extra_records:
  #   - name: "grafana.myvpn.example.com"
  #     type: "A"
  #     value: "100.64.0.3"
  #
  #   # you can also put it in one line
  #   - { name: "prometheus.myvpn.example.com", type: "A", value: "100.64.0.3" }

  # Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
  # Only works if there is at least a nameserver defined.
  magic_dns: true

  # Defines the base domain to create the hostnames for MagicDNS.
  # `base_domain` must be a FQDNs, without the trailing dot.
  # The FQDN of the hosts will be
  # `hostname.user.base_domain` (e.g., _myhost.myuser.example.com_).
  base_domain: example.com

# Unix socket used for the CLI to connect without authentication
# Note: for production you will want to set this to something like:
# unix_socket: /var/run/headscale.sock
unix_socket: ./headscale.sock
unix_socket_permission: "0770"
#
# headscale supports experimental OpenID connect support,
# it is still being tested and might have some bugs, please
# help us test it.
# OpenID Connect
# oidc:
#   only_start_if_oidc_is_available: true
#   issuer: "https://your-oidc.issuer.com/path"
#   client_id: "your-oidc-client-id"
#   client_secret: "your-oidc-client-secret"
#   # Alternatively, set `client_secret_path` to read the secret from the file.
#   # It resolves environment variables, making integration to systemd's
#   # `LoadCredential` straightforward:
#   client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
#   # client_secret and client_secret_path are mutually exclusive.
#
#   Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
#   parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
#
#   scope: ["openid", "profile", "email", "custom"]
#   extra_params:
#     domain_hint: example.com
#
#   List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
#   authentication request will be rejected.
#
#   allowed_domains:
#     - example.com
# Groups from keycloak have a leading '/'
#   allowed_groups:
#     - /headscale
#   allowed_users:
#     - alice@example.com
#
#   If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
#   This will transform `first-name.last-name@example.com` to the user `first-name.last-name`
#   If `strip_email_domain` is set to `false` the domain part will NOT be removed resulting to the following
#   user: `first-name.last-name.example.com`
#
#   strip_email_domain: true

# Logtail configuration
# Logtail is Tailscales logging and auditing infrastructure, it allows the control panel
# to instruct tailscale nodes to log their activity to a remote server.
logtail:
  # Enable logtail for this headscales clients.
  # As there is currently no support for overriding the log server in headscale, this is
  # disabled by default. Enabling this will make your clients send logs to Tailscale Inc.
  enabled: false

# Enabling this option makes devices prefer a random port for WireGuard traffic over the
# default static port 41641. This option is intended as a workaround for some buggy
# firewall devices. See https://tailscale.com/kb/1181/firewalls/ for more information.
randomize_client_port: false

The problem lies in several file-path settings. They all default to the current directory, i.e. the directory the headscale binary is run from, and need to be changed to the production values suggested in the config comments:

# For production:
# /var/lib/headscale/private.key
private_key_path: /var/lib/headscale/private.key

Changing it to an absolute path is all it takes. There are two more file paths; the next one is another private-key path:

noise:
  # The Noise private key is used to encrypt the
  # traffic between headscale and Tailscale clients when
  # using the new Noise-based protocol. It must be different
  # from the legacy private key.
  #
  # For production:
  # private_key_path: /var/lib/headscale/noise_private.key
  private_key_path: /var/lib/headscale/noise_private.key

The second problem

This problem is misleading as well. The error message is:

Error initializing error="unable to open database file: out of memory (14)"

It's just opening a file, and there was no sign of memory being anywhere near full; once again the real culprit was the file path:

# For production:
# db_path: /var/lib/headscale/db.sqlite
db_path: /var/lib/headscale/db.sqlite

Change them all to absolute paths and it works. One more thing: the headscale user must be granted permissions on paths like /var/lib/headscale/ and /etc/headscale/. Troubleshooting issues like these is genuinely painful when the logged errors aren't the real errors; open-source projects really could improve this kind of messaging. Follow-ups, such as adding a Mac as a node, will get their own post later.

Another year gone, and time rushes by. This one wasn't easy: a lot of things arrived out of nowhere, and as usual many plans went unfinished, more of them than usual this year, so this is another rather gloomy annual review.

Read more »

DataSource, being the most important source of data for database queries, deserves a closer look in MyBatis as well.
First, the parsing process:

SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(inputStream);
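For context, here is a minimal runnable sketch of where that inputStream typically comes from (standard MyBatis usage; the config file name and the printed environment id are just illustrative):

import java.io.InputStream;
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

public class BuildExample {
  public static void main(String[] args) throws Exception {
    // loads mybatis-config.xml from the classpath -- adjust the name to your project
    InputStream inputStream = Resources.getResourceAsStream("mybatis-config.xml");
    SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(inputStream);
    // the environment id parsed out of <environments default="...">
    System.out.println(sqlSessionFactory.getConfiguration().getEnvironment().getId());
  }
}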

When building the SqlSessionFactory, which is in fact a DefaultSqlSessionFactory:

public SqlSessionFactory build(InputStream inputStream) {
  return build(inputStream, null, null);
}

public SqlSessionFactory build(InputStream inputStream, String environment, Properties properties) {
  try {
    XMLConfigBuilder parser = new XMLConfigBuilder(inputStream, environment, properties);
    return build(parser.parse());
  } catch (Exception e) {
    throw ExceptionFactory.wrapException("Error building SqlSession.", e);
  } finally {
    ErrorContext.instance().reset();
    try {
      if (inputStream != null) {
        inputStream.close();
      }
    } catch (IOException e) {
      // Intentionally ignore. Prefer previous error.
    }
  }
}

As mentioned before, this parses mybatis-config.xml into the Configuration:

public Configuration parse() {
  if (parsed) {
    throw new BuilderException("Each XMLConfigBuilder can only be used once.");
  }
  parsed = true;
  parseConfiguration(parser.evalNode("/configuration"));
  return configuration;
}
private void parseConfiguration(XNode root) {
  try {
    // issue #117 read properties first
    propertiesElement(root.evalNode("properties"));
    Properties settings = settingsAsProperties(root.evalNode("settings"));
    loadCustomVfs(settings);
    loadCustomLogImpl(settings);
    typeAliasesElement(root.evalNode("typeAliases"));
    pluginElement(root.evalNode("plugins"));
    objectFactoryElement(root.evalNode("objectFactory"));
    objectWrapperFactoryElement(root.evalNode("objectWrapperFactory"));
    reflectorFactoryElement(root.evalNode("reflectorFactory"));
    settingsElement(settings);
    // read it after objectFactory and objectWrapperFactory issue #631
    // -------------> this is where the DataSource gets parsed
    environmentsElement(root.evalNode("environments"));
    databaseIdProviderElement(root.evalNode("databaseIdProvider"));
    typeHandlerElement(root.evalNode("typeHandlers"));
    mapperElement(root.evalNode("mappers"));
  } catch (Exception e) {
    throw new BuilderException("Error parsing SQL Mapper Configuration. Cause: " + e, e);
  }
}

What the environment parsing handles is this part of the config:

<environments default="development">
  <environment id="development">
    <transactionManager type="JDBC"/>
    <dataSource type="POOLED">
      <property name="driver" value="${driver}"/>
      <property name="url" value="${url}"/>
      <property name="username" value="${username}"/>
      <property name="password" value="${password}"/>
    </dataSource>
  </environment>
</environments>

Parsing again runs top-down:

private void environmentsElement(XNode context) throws Exception {
  if (context != null) {
    if (environment == null) {
      environment = context.getStringAttribute("default");
    }
    for (XNode child : context.getChildren()) {
      String id = child.getStringAttribute("id");
      if (isSpecifiedEnvironment(id)) {
        TransactionFactory txFactory = transactionManagerElement(child.evalNode("transactionManager"));
        DataSourceFactory dsFactory = dataSourceElement(child.evalNode("dataSource"));
        DataSource dataSource = dsFactory.getDataSource();
        Environment.Builder environmentBuilder = new Environment.Builder(id)
            .transactionFactory(txFactory)
            .dataSource(dataSource);
        configuration.setEnvironment(environmentBuilder.build());
        break;
      }
    }
  }
}

The first step in there is parsing the transaction manager element:

private TransactionFactory transactionManagerElement(XNode context) throws Exception {
  if (context != null) {
    String type = context.getStringAttribute("type");
    Properties props = context.getChildrenAsProperties();
    TransactionFactory factory = (TransactionFactory) resolveClass(type).getDeclaredConstructor().newInstance();
    factory.setProperties(props);
    return factory;
  }
  throw new BuilderException("Environment declaration requires a TransactionFactory.");
}

The resolveClass here uses the typeAliases system from the previous post; with type="JDBC" it resolves to JdbcTransactionFactory as the transaction manager.
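To see that resolution concretely, a small runnable sketch; the built-in aliases, including "JDBC" and "POOLED", are registered in Configuration's constructor:

import org.apache.ibatis.session.Configuration;
import org.apache.ibatis.type.TypeAliasRegistry;

public class AliasDemo {
  public static void main(String[] args) {
    // a fresh Configuration has the built-in aliases pre-registered
    TypeAliasRegistry registry = new Configuration().getTypeAliasRegistry();
    // prints org.apache.ibatis.transaction.jdbc.JdbcTransactionFactory
    System.out.println(registry.resolveAlias("JDBC").getName());
    // prints org.apache.ibatis.datasource.pooled.PooledDataSourceFactory
    System.out.println(registry.resolveAlias("POOLED").getName());
  }
}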
What follows is the creation of the DataSourceFactory, and through it the DataSource:

private DataSourceFactory dataSourceElement(XNode context) throws Exception {
  if (context != null) {
    String type = context.getStringAttribute("type");
    Properties props = context.getChildrenAsProperties();
    DataSourceFactory factory = (DataSourceFactory) resolveClass(type).getDeclaredConstructor().newInstance();
    factory.setProperties(props);
    return factory;
  }
  throw new BuilderException("Environment declaration requires a DataSourceFactory.");
}

Since the config file specifies POOLED, what gets created is a PooledDataSourceFactory. One detail deserves attention here: in MyBatis this factory actually extends UnpooledDataSourceFactory, which holds all the base logic:

public class PooledDataSourceFactory extends UnpooledDataSourceFactory {

  public PooledDataSourceFactory() {
    this.dataSource = new PooledDataSource();
  }

}

All it keeps for itself is creating the DataSource in the constructor. And although PooledDataSource doesn't directly extend UnpooledDataSource, its constructor builds on one as well:

public PooledDataSource() {
  dataSource = new UnpooledDataSource();
}

As for why it's done this way, presumably to reuse as much code as possible, because the most important difference between pooled and unpooled is simply whether a fresh connection is opened every time. A connection pool spares an application with lots of queries from repeatedly creating connections, saving the network overhead of establishing them; Unpooled just completes a connection to the database and hands it over. The main code:

@Override
public Connection getConnection() throws SQLException {
  return doGetConnection(username, password);
}

@Override
public Connection getConnection(String username, String password) throws SQLException {
  return doGetConnection(username, password);
}
private Connection doGetConnection(String username, String password) throws SQLException {
  Properties props = new Properties();
  if (driverProperties != null) {
    props.putAll(driverProperties);
  }
  if (username != null) {
    props.setProperty("user", username);
  }
  if (password != null) {
    props.setProperty("password", password);
  }
  return doGetConnection(props);
}
private Connection doGetConnection(Properties properties) throws SQLException {
  initializeDriver();
  Connection connection = DriverManager.getConnection(url, properties);
  configureConnection(connection);
  return connection;
}
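To make the contrast concrete, here is a minimal sketch of using UnpooledDataSource on its own; the driver class, URL, and credentials are placeholders to adjust for your database:

import java.sql.Connection;
import org.apache.ibatis.datasource.unpooled.UnpooledDataSource;

public class UnpooledExample {
  public static void main(String[] args) throws Exception {
    // placeholder driver/url/credentials -- swap in your own settings
    UnpooledDataSource ds = new UnpooledDataSource(
        "com.mysql.cj.jdbc.Driver",
        "jdbc:mysql://localhost:3306/test",
        "root", "secret");
    try (Connection conn = ds.getConnection()) {
      // every getConnection() call opens a brand-new physical connection
      System.out.println("connected: " + conn.getMetaData().getURL());
    }
  }
}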

The pooled version, on the other hand, has to handle the pooling logic:

private PooledConnection popConnection(String username, String password) throws SQLException {
    boolean countedWait = false;
    PooledConnection conn = null;
    long t = System.currentTimeMillis();
    int localBadConnectionCount = 0;

    while (conn == null) {
      lock.lock();
      try {
        if (!state.idleConnections.isEmpty()) {
          // Pool has available connection
          conn = state.idleConnections.remove(0);
          if (log.isDebugEnabled()) {
            log.debug("Checked out connection " + conn.getRealHashCode() + " from pool.");
          }
        } else {
          // Pool does not have available connection
          if (state.activeConnections.size() < poolMaximumActiveConnections) {
            // Can create new connection
            conn = new PooledConnection(dataSource.getConnection(), this);
            if (log.isDebugEnabled()) {
              log.debug("Created connection " + conn.getRealHashCode() + ".");
            }
          } else {
            // Cannot create new connection
            PooledConnection oldestActiveConnection = state.activeConnections.get(0);
            long longestCheckoutTime = oldestActiveConnection.getCheckoutTime();
            if (longestCheckoutTime > poolMaximumCheckoutTime) {
              // Can claim overdue connection
              state.claimedOverdueConnectionCount++;
              state.accumulatedCheckoutTimeOfOverdueConnections += longestCheckoutTime;
              state.accumulatedCheckoutTime += longestCheckoutTime;
              state.activeConnections.remove(oldestActiveConnection);
              if (!oldestActiveConnection.getRealConnection().getAutoCommit()) {
                try {
                  oldestActiveConnection.getRealConnection().rollback();
                } catch (SQLException e) {
                  /*
                     Just log a message for debug and continue to execute the following
                     statement like nothing happened.
                     Wrap the bad connection with a new PooledConnection, this will help
                     to not interrupt current executing thread and give current thread a
                     chance to join the next competition for another valid/good database
                     connection. At the end of this loop, bad {@link @conn} will be set as null.
                   */
                  log.debug("Bad connection. Could not roll back");
                }
              }
              conn = new PooledConnection(oldestActiveConnection.getRealConnection(), this);
              conn.setCreatedTimestamp(oldestActiveConnection.getCreatedTimestamp());
              conn.setLastUsedTimestamp(oldestActiveConnection.getLastUsedTimestamp());
              oldestActiveConnection.invalidate();
              if (log.isDebugEnabled()) {
                log.debug("Claimed overdue connection " + conn.getRealHashCode() + ".");
              }
            } else {
              // Must wait
              try {
                if (!countedWait) {
                  state.hadToWaitCount++;
                  countedWait = true;
                }
                if (log.isDebugEnabled()) {
                  log.debug("Waiting as long as " + poolTimeToWait + " milliseconds for connection.");
                }
                long wt = System.currentTimeMillis();
                condition.await(poolTimeToWait, TimeUnit.MILLISECONDS);
                state.accumulatedWaitTime += System.currentTimeMillis() - wt;
              } catch (InterruptedException e) {
                // set interrupt flag
                Thread.currentThread().interrupt();
                break;
              }
            }
          }
        }
        if (conn != null) {
          // ping to server and check the connection is valid or not
          if (conn.isValid()) {
            if (!conn.getRealConnection().getAutoCommit()) {
              conn.getRealConnection().rollback();
            }
            conn.setConnectionTypeCode(assembleConnectionTypeCode(dataSource.getUrl(), username, password));
            conn.setCheckoutTimestamp(System.currentTimeMillis());
            conn.setLastUsedTimestamp(System.currentTimeMillis());
            state.activeConnections.add(conn);
            state.requestCount++;
            state.accumulatedRequestTime += System.currentTimeMillis() - t;
          } else {
            if (log.isDebugEnabled()) {
              log.debug("A bad connection (" + conn.getRealHashCode() + ") was returned from the pool, getting another connection.");
            }
            state.badConnectionCount++;
            localBadConnectionCount++;
            conn = null;
            if (localBadConnectionCount > (poolMaximumIdleConnections + poolMaximumLocalBadConnectionTolerance)) {
              if (log.isDebugEnabled()) {
                log.debug("PooledDataSource: Could not get a good connection to the database.");
              }
              throw new SQLException("PooledDataSource: Could not get a good connection to the database.");
            }
          }
        }
      } finally {
        lock.unlock();
      }

    }

    if (conn == null) {
      if (log.isDebugEnabled()) {
        log.debug("PooledDataSource: Unknown severe error condition.  The connection pool returned a null connection.");
      }
      throw new SQLException("PooledDataSource: Unknown severe error condition.  The connection pool returned a null connection.");
    }

    return conn;
  }

Its entry point isn't a get method but a pop, which already says something about the semantics:
org.apache.ibatis.datasource.pooled.PooledDataSource#getConnection()

@Override
public Connection getConnection() throws SQLException {
  return popConnection(dataSource.getUsername(), dataSource.getPassword()).getProxyConnection();
}
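The getProxyConnection() at the end matters too: PooledConnection is a JDK dynamic-proxy InvocationHandler that intercepts close(), so closing a pooled connection calls pushConnection and returns it to the pool rather than really closing it. A minimal sketch of the effect (placeholder driver/url/credentials again):

import java.sql.Connection;
import org.apache.ibatis.datasource.pooled.PooledDataSource;

public class PooledExample {
  public static void main(String[] args) throws Exception {
    // placeholder driver/url/credentials -- swap in your own settings
    PooledDataSource ds = new PooledDataSource(
        "com.mysql.cj.jdbc.Driver",
        "jdbc:mysql://localhost:3306/test",
        "root", "secret");
    Connection c1 = ds.getConnection(); // popConnection(...) behind the scenes
    c1.close(); // intercepted by the proxy: the real connection returns to the idle pool
    try (Connection c2 = ds.getConnection()) {
      // in this single-threaded sketch, c2 normally reuses the connection c1 gave back
      System.out.println("reused: " + c2.getMetaData().getURL());
    }
    ds.forceCloseAll(); // really close all underlying connections on shutdown
  }
}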

Exactly how a connection is obtained can be covered in detail in the next post.
