After several years of running Windows on physical machines, I finally found a better way to quickly fix, restore, and create new Windows installations: booting Windows from a VHDX file.

Note
For more information, see the official documentation; it is not the topic of this article.

However, over the past few years, replacing the old VHDX with a new one has turned out to be a surprisingly tricky job, at least from my current perspective.

My old way is as follows:

Assume we have three VHDX files on the X: drive:

  • X:\Windows-11.vhdx (the old one, currently in use)
  • X:\Windows-11-Recovery.vhdx (the mediator)
  • X:\Windows-11-New.vhdx (the new one)

To replace the old one with the new one, I had to perform the following steps:

  1. Boot into the Windows-11-Recovery.vhdx
  2. Rename the Windows-11.vhdx to Windows-11-Old.vhdx
  3. Rename the Windows-11-New.vhdx to Windows-11.vhdx
  4. Reboot into the new VHDX (using the same boot menu entry as the old one)
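In command-prompt terms, steps 2 and 3 are just two renames (drive letter and file names as in the list above):

```batch
rem Run these from inside Windows-11-Recovery.vhdx (step 1)
ren X:\Windows-11.vhdx Windows-11-Old.vhdx
ren X:\Windows-11-New.vhdx Windows-11.vhdx
rem Then reboot into the existing boot menu entry (step 4)
```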

I had considered following Android's A/B partition scheme, but then realized its naming strategy is quite ugly and would cause even more confusion, so I settled on the approach above.

I would have called this brilliant, until I got fxxked by Windows RE.

Notice
Don’t get me wrong: I think Windows RE is a great feature.
Read this article if you are interested: Windows Recovery Environment explained.
It just got fxxked up by other things.

The problem

So what’s the problem?

The problem is this: after booting into the mediator VHDX and finishing the renames, I rebooted into the new VHDX, only to find it was not bootable. It showed me something like the image below:

Windows Recovery issue

I then tried renaming everything back, but it still showed the same error.

So, why is this happening?

The cause

The cause is simple, but not easy to notice.

The Winre.wim file on the Recovery partition had mysteriously disappeared. I assume the Recovery partition was simply too small to hold it.

Indeed, when I tried to copy Winre.wim from a Windows 11 ISO onto the Recovery partition, Windows complained that the partition was too small.

The solution

Thanks to the excellent official documentation, I was able to grab the commands needed to fix the issue.

Since my EFI and Recovery partitions were completely fxxked up, I had to buy a new USB flash drive and create a Windows To Go installation on it. I then booted into Windows To Go and started fixing the issue.

First, I recreated the EFI and Recovery partitions using the diskpart command.

Then I rebuilt the boot files with the bcdboot command.

Finally, I used the reagentc command to point Windows RE at the Winre.wim on the Recovery partition.
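For the record, after recreating the EFI (S:) and Recovery (R:) partitions in diskpart, the remaining fix boiled down to commands of roughly the following shape. The drive letters and paths are from my setup (C: is the Windows inside the mounted VHDX), so treat this as a sketch and double-check the official documentation before running anything destructive:

```batch
rem Rebuild the boot files on the new EFI partition
bcdboot C:\Windows /s S: /f UEFI

rem Copy Winre.wim (e.g. from a Windows 11 ISO) onto the Recovery partition
md R:\Recovery\WindowsRE
copy Winre.wim R:\Recovery\WindowsRE\

rem Register the image with Windows RE and enable it
reagentc /setreimage /path R:\Recovery\WindowsRE /target C:\Windows
reagentc /enable
```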

After all these steps, I rebooted into the VHDX, and it worked!

Note
All the commands are available in the references section.

Since I now have a Windows To Go USB flash drive, I no longer need a mediator VHDX for the replacement job :).

This also frees up at least 20 GB of space on my X: drive.

References

Important
First things first: this is for personal use only. I am not responsible for the content or its consequences.

Introduction

This was my first contact with Kubernetes and the GitHub Actions Runner Controller (ARC). It felt a little painful at the beginning :).

Special thanks to Bassem Dghaidi’s awesome video. You can find the link in the References section.

Prerequisites

  • Kubernetes
  • Helm 3
  • minikube (optional)
  • GitHub Organization account

Registering a GitHub App for ARC

Create a new GitHub App

Go to https://github.com/organizations/OmicoDev/settings/apps/new

Note
Replace OmicoDev with your organization name.

Then, follow the instructions to create a new GitHub App.

Install the GitHub App to your organization

Go to https://github.com/organizations/OmicoDev/settings/apps/omico-actions-runner-controller/installations and then click Install to install the GitHub App to your organization.

Note
Replace OmicoDev with your organization name.
Replace omico-actions-runner-controller with your GitHub App name.

Install ARC

Create a new minikube cluster (optional)

minikube start -p arc --cpus=12 --memory=32G --mount

Note
You should modify the parameters according to your needs.

Installing Actions Runner Controller

helm install arc \
    --namespace arc-systems \
    --create-namespace \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller

Configuring a runner scale set

INSTALLATION_NAME="arc-ubuntu-latest"
SECRET_NAME="omico-actions-runner-controller"
GITHUB_CONFIG_URL="https://github.com/OmicoDev"
helm install "$INSTALLATION_NAME" \
    --namespace arc-runners \
    --create-namespace \
    --set githubConfigUrl="$GITHUB_CONFIG_URL" \
    --set githubConfigSecret="$SECRET_NAME" \
    --set minRunners=3 \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set

Note
Replace OmicoDev with your organization name.
Replace arc-ubuntu-latest with your runner installation name.
Replace omico-actions-runner-controller with your GitHub App name.
minRunners is optional; adjust it to your needs.
For more configurations, please refer to values.yaml of gha-runner-scale-set.

Create a secret for the GitHub App

SECRET_NAME="omico-actions-runner-controller"
GITHUB_APP_ID="114514"
GITHUB_APP_INSTALLATION_ID="114514"
GITHUB_APP_PRIVATE_KEY=$(cat omico-actions-runner-controller.private-key.pem)
kubectl create secret generic "$SECRET_NAME" \
    --namespace=arc-runners \
    --from-literal=github_app_id=${GITHUB_APP_ID} \
    --from-literal=github_app_installation_id=${GITHUB_APP_INSTALLATION_ID} \
    --from-literal=github_app_private_key=${GITHUB_APP_PRIVATE_KEY}

Note
Replace omico-actions-runner-controller with your GitHub App name.
Replace 114514 with your GitHub App ID and Installation ID.
Replace omico-actions-runner-controller.private-key.pem with your private key file name. (You can get it from the GitHub App settings page; see Create a new GitHub App.)

Verify the installation

kubectl get pods -n arc-systems

If you see pods like the following, the installation was successful.

NAME                                    READY   STATUS    RESTARTS      AGE
arc-gha-rs-controller-c8d75c47f-9j7st   1/1     Running   2 (65m ago)   172m
arc-ubuntu-latest-754b578d-listener     1/1     Running   0             46m

If the listener pod is missing, check the logs of the arc-gha-rs-controller pod.

kubectl logs -n arc-systems $(kubectl get pods -n arc-systems -o=name | grep "pod/arc-gha-rs-controller")
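The grep filter in that one-liner can be exercised without a cluster; this sketch fakes the kubectl get pods -o=name output with sample pod names:

```shell
# Fake output in the same shape as: kubectl get pods -n arc-systems -o=name
pods='pod/arc-gha-rs-controller-c8d75c47f-9j7st
pod/arc-ubuntu-latest-754b578d-listener'

# Pick out the controller pod, exactly as the command above does
controller=$(printf '%s\n' "$pods" | grep "pod/arc-gha-rs-controller")
printf '%s\n' "$controller"
```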

Good luck with the debugging :)

Useful commands

Install Dashboard for minikube

minikube -p arc addons enable metrics-server
minikube -p arc dashboard

Delete the minikube cluster

minikube delete -p arc

Delete the runner scale set

INSTALLATION_NAME="arc-ubuntu-latest"
helm uninstall "$INSTALLATION_NAME" --namespace arc-runners

Get basic information about the ARC secret

SECRET_NAME="omico-actions-runner-controller"
kubectl describe secret "$SECRET_NAME" --namespace arc-runners

Get the ARC secret private key

Note
Be careful with the private key.

SECRET_NAME="omico-actions-runner-controller"
kubectl get "secrets/${SECRET_NAME}" --template="{{.data.github_app_private_key}}" --namespace arc-runners | base64 -d
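Since the value comes back base64-encoded, the decode step can be sanity-checked locally with a made-up key (no cluster involved):

```shell
# Simulate what the secret stores: the raw key, base64-encoded
raw_key='-----BEGIN DUMMY PRIVATE KEY-----'
encoded=$(printf '%s' "$raw_key" | base64)

# The same decode step applied to the real secret above
decoded=$(printf '%s' "$encoded" | base64 -d)
printf '%s\n' "$decoded"
```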

Delete the ARC secret

SECRET_NAME="omico-actions-runner-controller"
kubectl delete secret "$SECRET_NAME" --namespace=arc-runners

References


Ubuntu

sudo apt update && sudo apt upgrade -y
sudo apt install zsh -y
sh -c "$(wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh -O -)"

Configuring

git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting
sed -i 's/ZSH_THEME="robbyrussell"/ZSH_THEME="agnoster"/g' ~/.zshrc
sed -i 's/plugins=(git)/plugins=(git zsh-autosuggestions zsh-syntax-highlighting)/g' ~/.zshrc
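The two sed substitutions can be dry-run against a scratch copy before touching the real ~/.zshrc; this sketch uses a temporary file seeded with the stock Oh My Zsh defaults:

```shell
# Scratch file with the default lines Oh My Zsh generates
tmp_zshrc=$(mktemp)
printf '%s\n' 'ZSH_THEME="robbyrussell"' 'plugins=(git)' > "$tmp_zshrc"

# Same substitutions as above, applied to the scratch file
sed -i 's/ZSH_THEME="robbyrussell"/ZSH_THEME="agnoster"/g' "$tmp_zshrc"
sed -i 's/plugins=(git)/plugins=(git zsh-autosuggestions zsh-syntax-highlighting)/g' "$tmp_zshrc"

cat "$tmp_zshrc"
```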


sudo apt update && sudo apt upgrade -y

Install & configure Oh My Zsh

See here.

Remove Windows environment variables

In /etc/wsl.conf, add the following:

[interop]
appendWindowsPath = false

Then restart WSL.

wsl --shutdown
wsl

It has been a long time since I last wrote a blog post; I have been busy with my own projects. Today, prompted by Sonatype's problems, I will jot down some random thoughts.

Background

Let's start with what triggered this. Honestly, I have had complaints about s01.oss.sonatype.org for a long time. Until now, I have tried to stay fairly restrained in how I view and handle it.

The UI is user-hostile: it loads slowly, and it errors out from time to time. It is hard to believe this is infrastructure still widely used by open-source developers in 2023 (not that there is anywhere else to go).

Having said all that, is it really irreplaceable? From a certain angle, actually, yes.

A small Gradle tip

Here is a small tip, along with an explanation of why Sonatype is "irreplaceable".

We will use this project as the example: gradle-project-initializer-template

By 2023, anyone using Gradle should no longer be a stranger to pluginManagement in settings.gradle.kts.

Generally, we need to add the following to the top-level settings.gradle.kts:

pluginManagement {
    repositories {
        mavenCentral()
        gradlePluginPortal()
    }
}

Otherwise, the Composite Build modules will not work and Gradle will complain that it cannot find the dependencies. Here is a simple example of the error.

This is the error you get after deleting the entire pluginManagement block:

A problem occurred configuring root project 'gpi-root'.
> Could not determine the dependencies of null.
> Could not resolve all task dependencies for configuration ':classpath'.
> Could not find me.omico.consensus.api:me.omico.consensus.api.gradle.plugin:0.3.0.
  Searched in the following locations:
  - https://plugins.gradle.org/m2/me/omico/consensus/api/me.omico.consensus.api.gradle.plugin/0.3.0/me.omico.consensus.api.gradle.plugin-0.3.0.pom
  Required by:
      project : > project :project

Possible solution:
- Declare repository providing the artifact, see the documentation at https://docs.gradle.org/current/userguide/declaring_repositories.html

OK, so now we know that when we declare no repositories at all, Gradle "takes the liberty" of searching its own repository, gradlePluginPortal().

So what happens if we add the repository that hosts this Gradle plugin?

pluginManagement {
    repositories {
        maven(url = "https://maven.omico.me")
    }
}

Surprisingly, the project still does not build. Why is that?

A problem occurred configuring root project 'gpi-root'.
> Could not determine the dependencies of null.
> Could not resolve all task dependencies for configuration ':classpath'.
> Could not find com.diffplug.spotless:com.diffplug.spotless.gradle.plugin:6.20.0.
  Searched in the following locations:
  - https://maven.omico.me/com/diffplug/spotless/com.diffplug.spotless.gradle.plugin/6.20.0/com.diffplug.spotless.gradle.plugin-6.20.0.pom
  Required by:
      project : > project :project

Possible solution:
- Declare repository providing the artifact, see the documentation at https://docs.gradle.org/current/userguide/declaring_repositories.html

It seems gradlePluginPortal() is no longer added by Gradle "on its own initiative". So let's add it back in and see.

And this time it finally works.

But there is another pitfall to watch out for: if a library that a plugin depends on does not exist in gradlePluginPortal() but lives in, say, mavenCentral(), your project will keep failing until you add mavenCentral() as well.

Here is the fun part: if the Gradle plugins you use only depend on gradlePluginPortal() and mavenCentral(), you do not need to declare anything at all and the project still builds fine.
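Putting the pieces together, a top-level settings.gradle.kts for a plugin hosted outside the Plugin Portal ends up declaring all three repositories explicitly (the custom repository URL is the one from the example above):

```kotlin
pluginManagement {
    repositories {
        // The repository hosting the plugin itself
        maven(url = "https://maven.omico.me")
        // Plugins published to the Gradle Plugin Portal
        gradlePluginPortal()
        // Libraries that those plugins depend on
        mavenCentral()
    }
}
```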

That is why I say Sonatype is "irreplaceable": in some of the cases above, it saves you a few lines of Gradle configuration. (laughs)

The workarounds

Leaving aside the so-called "irreplaceability" discussed above, what options do we have to solve this quickly? There are some!

Let's list the candidates, along with the reasons I did not choose some of them.

Using a local repository

A local repository is fast, and you can mess around with it freely. The downside is that it is inconvenient and nobody else can use it. That clearly does not fit my needs: I want to publish plugins for others to use.

Using JitPack

I still remember that this project started out as merely a personal project, and it has had all kinds of bizarre problems. I do not want to relive that like some déjà vu. Without a doubt, it would be escaping one hell for another. (Mysterious voice: and the method you are using now is not hell? I guess only time will tell. laughs)

Using GitHub Packages

This is the modern solution. It is solid in every respect, but one thing stops me from using it: even fetching public dependencies requires an access token. I find that completely unacceptable; I can only assume a design like that exists to collect data.

Using GitHub Pages

I first saw this approach in rovo89's XposedBridge. Its advantage is that it is simple and direct, and you get 100% autonomy. The downside is that to make it comfortable to use, you need to streamline and standardize your publishing process and write some small tools. As for the implementation details, I hope I will not procrastinate and will cover them in a future post.

Closing words

All that said, I still hope you will head over and read what I wrote in the README of my Maven repository.

In short: Sonatype, fuck you!

When using Spotless, my habit is to run spotlessApply from the project root; in IDEA the command is gradle spotlessApply. But my projects now use composite builds, and running :compositeBuild:spotlessApply from the root does not format the code inside my composite-build projects.

The surface-level reason is that I deliberately configured target("src/**/*.kt") to limit the formatting scope, so the formatting does not kick in. The deeper reason is that the tasks of a composite build's subprojects are not run by the root project. So, as the documentation says, you need to wire the tasks up so that the root project's task depends on the composite builds' tasks. But the configuration shown in the documentation causes a problem: what if one of the composite builds has no task with that name? Hence this article.

First, let's look at how the official documentation configures it:

tasks.register("publishDeps") {
    dependsOn(gradle.includedBuilds.map { it.task(":publishMavenPublicationToMavenRepository") })
}

The problem with this approach is that it assumes every composite build has a publishMavenPublicationToMavenRepository task. If one of your composite builds does not have that task, this configuration fails.

At first, I was deeply misled by the official documentation. Why configure the composite builds' tasks from the parent build.gradle.kts? Why not do it the other way around? With that flash of insight, I solved the problem neatly.

The full configuration looks like this:

plugins {
    id("com.diffplug.spotless")
}

allprojects {
    apply<SpotlessPlugin>()
    spotless {
        kotlin {
            target("src/**/*.kt")
            ktlint()
        }
    }
}

subprojects {
    rootProject.tasks {
        spotlessApply.dependsOn(this@subprojects.tasks.spotlessApply)
        spotlessCheck.dependsOn(this@subprojects.tasks.spotlessCheck)
    }
}

After days of being tortured by IntelliJ refusing to highlight syntax, I am finally free. Thinking back to when the problem first appeared: ah, right, it was the damned buildSrc again.

Background

Out of respect for the Kotlin version that Gradle enforces, and because I needed the Kotlin Gradle plugin as a dependency in buildSrc, I used embeddedKotlin("gradle-plugin") instead of kotlin("gradle-plugin", "<version>"). From then on, the whole project kept showing "Syntax highlighting has been temporarily turned off in file xxx.kt because of an internal error", which badly hurt my development efficiency.

Discovery and fix

During development I noticed that sync and publish operations never triggered the error, but running build reported a binary-compatibility problem caused by a mismatched Kotlin version. That version number caught my attention: the version in the error matched the one embedded in Gradle, not the one used in my project.

After some code searching, I found that buildSrc was the only place using it. I switched it to the project's Kotlin version and, oh nice, problem solved.
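In buildSrc/build.gradle.kts terms, the change was roughly the following (the version string is a placeholder for whatever your project pins):

```kotlin
// buildSrc/build.gradle.kts
dependencies {
    // Before: resolves to the Kotlin version embedded in Gradle
    // implementation(embeddedKotlin("gradle-plugin"))

    // After: pin to the same Kotlin version the project itself uses
    implementation(kotlin("gradle-plugin", "1.9.0"))
}
```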

The main topic

After all that, wasn't this post supposed to be about ditching buildSrc?

Yes indeed. So let's talk about why we should ditch buildSrc.

Honestly, when buildSrc first shipped I was very excited. It opened the door to a new world: project Gradle scripts no longer had to be bloated, and custom tasks became much easier to write.

But the more I used it, the more drawbacks I found:

  • The tasks that compile buildSrc are not cacheable, which means every change under buildSrc triggers a full recompile. For a very large project this is a painful process that badly hurts development efficiency.

  • It causes the code-highlighting problem described at the beginning of this post.

Still, buildSrc's convenience is real, so is there an alternative?

The answer is yes!!!

Composite builds step onto the stage.

Composite builds directly solve all of buildSrc's pain points listed above, and they enable more interesting setups as well.

For example, Gradle's own source code includes build-logic-commons as a composite build in both the root project and build-logic. See the Gradle source code for details.

Closing words

There is a lot about composite builds I have not covered, because I think code expresses how to actually use them better than an article can. If you are interested, the following projects are good material for going deeper:

Beginner:
https://github.com/Omico/gradle-issue-missing-kotlin-dsl-in-settings
https://github.com/square/wire

Slightly more complex:
https://github.com/android/nowinandroid
https://github.com/Omico/age
https://github.com/Omico/Gradm

Master level:
https://github.com/gradle/gradle

It has been a long while since my last article. Since my newly bought MacBook needs its own build anyway, today I will dash off a post about compiling Telegram Desktop.

This article is mainly a personal record.

Windows

Environment setup

Download and install the following software:

  • Visual Studio 2022

    winget install Microsoft.VisualStudio.2022.Community

Pick a folder for the Telegram Desktop source, e.g. D:\TelegramDesktopBuild, then create and run a PowerShell script there with the following content:

Set-Location $PSScriptRoot

New-Item -ItemType Directory -Force -Path "Libraries" > $null
New-Item -ItemType Directory -Force -Path "ThirdParty" > $null

function Install-PackageIfNotExists($Executable, $PackageName) {
    if (-Not(Get-Command $Executable -ErrorAction SilentlyContinue)) {
        winget install $PackageName
    }
}

Install-PackageIfNotExists "cmake" "Kitware.CMake"
Install-PackageIfNotExists "git" "Git.Git"
Install-PackageIfNotExists "python" "Python.Python.3.9"

pip install --upgrade pywin32

if (Test-Path -Path "tdesktop") {
    Set-Location "tdesktop"
    git fetch --all
    Set-Location ..
}
else {
    git clone --recursive https://github.com/telegramdesktop/tdesktop.git
}

After the source is downloaded, we can also adjust the number of build threads to our needs by editing tdesktop\Telegram\build\prepare\prepare.py:

...
environment = {
-    'MAKE_THREADS_CNT': '-j8',
+    'MAKE_THREADS_CNT': '-j32',
...

Prepare libraries

Run tdesktop\Telegram\build\prepare\win.bat from the x64 Native Tools Command Prompt for VS 2022, in the current path.

Build the project

Same as in the official documentation.

Delete the previously generated files:

rm -r tdesktop\out

Specify your own API ID and API hash, or use the test ones:

cd tdesktop\Telegram
.\configure.bat x64 ^
    -D TDESKTOP_API_ID=YOUR_API_ID ^
    -D TDESKTOP_API_HASH=YOUR_API_HASH ^
    -D DESKTOP_APP_USE_PACKAGED=OFF ^
    -D DESKTOP_APP_DISABLE_CRASH_REPORTS=OFF

or

cd tdesktop\Telegram
.\configure.bat x64 ^
    -D TDESKTOP_API_TEST=ON ^
    -D DESKTOP_APP_USE_PACKAGED=OFF ^
    -D DESKTOP_APP_DISABLE_CRASH_REPORTS=OFF

The rest follows the official documentation.

macOS

Environment setup

Download the following software:

Build the project

Follow the official documentation.

The night before last I could not sleep, and my head kept wondering how Chrome on Android fetches a website's favicon. So I got straight up, downloaded a copy of the Chromium source for Android, and prepared to crack it open.

After the download finished, out of habit I ran a full build first, but hit a few small pitfalls along the way, hence this quick post.

For the sake of this article I had to pick up the blog I had barely maintained for almost a year, and rebuilding it took quite some time too.


The blog had gone unmaintained for a long time. I planned to write up yesterday's sleepless-night build of the Chromium Android source, only to find that the theme I used to use is no longer maintained.

Among the many themes, NexT caught my eye at first glance.

After some time spent tweaking the configuration bit by bit, it is finally in shape.

I am still debating whether to delete my old posts; the writing really is terrible... as is the current one, honestly.
