format the content

Commit efadf21af1 (parent 4cb2cddcef) by Ivan Dimitrov, 2023-11-19 10:36:10 +02:00
5 changed files with 100 additions and 89 deletions


date: 21 Sep 2023
z: 7
---
[LUKS](https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup) is an encryption specification for Linux used to encrypt disk partitions. The [cryptsetup](https://man.archlinux.org/man/cryptsetup.8.en) utility is usually used for that. After a partition is encrypted, it can be opened for reading and writing by inputting a password or a keyfile.
### Technical details
> cryptsetup is used to conveniently set up dm-crypt managed device-mapper mappings. These include plain dm-crypt volumes and LUKS volumes. The difference is that LUKS uses a metadata header and can hence offer more features than plain dm-crypt. On the other hand, the header is visible and vulnerable to damage.
So after a partition is encrypted it has a LUKS header with some encryption metadata and a body. The header tells the program (cryptsetup) how to decrypt the partition. If that header is damaged in any way, then trying to decrypt with `cryptsetup luksOpen /dev/sdx1` will print `Device /dev/sdx1 is not a valid LUKS device.` if the system is up-to-date. On the server where this happened, the system was CentOS 7 with cryptsetup version 2.0.3 (as opposed to 2.6.1), so when I tried to decrypt, it didn't prompt for a password and didn't print anything. After upgrading the version following [this gitlab issue](https://gitlab.com/cryptsetup/cryptsetup/-/issues/783), I got it to print that message, so I had something to google.
> Please test with last released and supported version (currently 2.5.0), we do not have resources to debug old versions, thanks.
A good bit of googling led me to [this thread](https://bbs.archlinux.org/viewtopic.php?id=284768) on the Arch Linux forums. They describe the steps needed to diagnose most LUKS problems. One thing that was different in this case was that the command `sudo dd if=/dev/sdx1 count=20 | hexdump -C` printed only zeroes.
```bash
dd if=/dev/sdx1 count=20 | hexdump -C
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00002800
20+0 records in
20+0 records out
10240 bytes (10 kB, 10 KiB) copied, 0.00229011 s, 4.5 MB/s
```
Testing with a larger block count (`count=2050`) showed that the first 2030 or so blocks were completely wiped. This meant that the LUKS header and possibly some of the data were gone. This could still have been fixed with a header backup file using `cryptsetup luksHeaderRestore <device> --header-backup-file <file>`.
Unfortunately, there was no header backup file, so the only solution was to restore a backup of the entire partition.
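In hindsight, the real safety net is a header backup taken while the device is still healthy, via `cryptsetup luksHeaderBackup <device> --header-backup-file <file>`, the counterpart of the restore command above. The magic-byte check that `hexdump` enables can also be simulated on ordinary files; the filenames below are invented for this sketch:

```bash
# A valid LUKS header starts with the magic bytes "LUKS\xba\xbe".
# Simulate a healthy header and a wiped one using plain files:
printf 'LUKS\xba\xbe' > good-header.img
dd if=/dev/zero of=wiped-header.img bs=512 count=4 2>/dev/null
head -c 4 good-header.img                 # prints "LUKS"
hexdump -C wiped-header.img | head -n 2   # only zeroes, collapsed into a "*" line
```

The same `head`/`hexdump` inspection against the real device (as root) tells you immediately whether the header region still contains anything recoverable.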


z: 2
draft: false
---
> parcelLab is the only truly global enterprise post-purchase software provider, enabling brands to increase top-line revenue, decrease operational costs, and optimize the customer experience.
[Parcel Lab](https://parcellab.com/)
parcelLab takes care of post-purchase operations like order tracking, email notifications, delivery status updates, data processing and more, so that businesses don't have to.
---
### Technical overview
This integration is straightforward thanks to the [amazing documentation](https://how.parcellab.works/docs/) provided by the parcelLab team.
You really want to use the API, even though there are other options for submitting data to their platform.
The data model is based on the [tracking](https://how.parcellab.works/docs/onboarding/data-model): a data object with four fields for the delivery information. An order is composed of one or more trackings.
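As a rough sketch of that data model (the field names here are illustrative, not the exact API schema; check the parcelLab docs for the real one), a tracking and the order grouping the platform performs might look like:

```ts
// Illustrative model only: the real field names live in the parcelLab docs.
interface Tracking {
  courier: string;            // assumed courier identifier, e.g. "dhl-germany"
  trackingNumber: string;
  orderNumber: string;        // groups trackings into one order
  destinationCountry: string;
}

interface Order {
  orderNumber: string;
  trackings: Tracking[];      // an order is composed of one or more trackings
}

// Group a flat list of submitted trackings into orders,
// mirroring the automated grouping the platform does after submission.
function groupIntoOrders(trackings: Tracking[]): Order[] {
  const byOrder = new Map<string, Tracking[]>();
  for (const t of trackings) {
    const list = byOrder.get(t.orderNumber) ?? [];
    list.push(t);
    byOrder.set(t.orderNumber, list);
  }
  return [...byOrder.entries()].map(([orderNumber, ts]) => ({
    orderNumber,
    trackings: ts,
  }));
}
```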
Once data is submitted, the platform starts an automated process where it groups the new trackings into their respective orders and starts listening for events like "dispatch", "payment received" etc. to run custom actions. Each business can configure these events and actions so that they best match its operations. For example, an "order created" event could notify the customer that the order has started, as well as handle some other business logic in the background.
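At its core, that event-and-action configuration amounts to a dispatch table; the names below are invented for the sketch and are not parcelLab's API:

```ts
// Illustrative only: map event names to the configured actions for a business.
type Action = (orderNumber: string) => string;

const configuredActions: Record<string, Action[]> = {
  "order created": [
    (order) => `notify customer that ${order} has started`,
    (order) => `run background business logic for ${order}`,
  ],
  dispatch: [(order) => `send dispatch notification for ${order}`],
};

// Run every action configured for an incoming event; unknown events are ignored.
function handleEvent(event: string, orderNumber: string): string[] {
  return (configuredActions[event] ?? []).map((action) => action(orderNumber));
}
```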
Their [order status page](https://how.parcellab.works/docs/track-and-communicate/order-status-page) is a convenient script that you can configure for your website. The script reads the URL to find an order number so that it can fetch the most up-to-date information for that order and display it in an iFrame.
This system allows for a seamless, declarative, event-based integration where the business takes care of the data and events (and sales) and parcelLab takes care of the rest.
---
All this can be viewed on the tracking page embedded anywhere.
```html
<div id="parcellab-track-and-trace">
  <img src="https://cdn.parcellab.com/img/loading-spinner-1.gif" alt="loading" />
</div>
<script>
  function plTrackAndTraceStart() {
    window.parcelLabTrackAndTrace.initialize({
      plUserId: TYPE_YOUR_USER_ID_HERE,
    });
    var linkTag = document.createElement("link");
    linkTag.rel = "stylesheet";
    linkTag.href = "https://cdn.parcellab.com/css/v5/main.min.css";
    document.getElementsByTagName("head")[0].appendChild(linkTag);
  }
</script>
<script async onload="plTrackAndTraceStart()" src="https://cdn.parcellab.com/js/v5/main.min.js"></script>
```
This shows a nice UI that can be [customized](https://how.parcellab.works/docs/track-and-communicate/order-status-page/configuration#additional-options).


date: Jul 29, 2023 - Nov 5, 2023
z: 3
---
This project aims to be a Google Drive frontend. It uses the Google APIs to fetch document data and display that data in a wiki-style web page.
### [Demo page](https://ivan.stepsy.wiki/space/spc)
(website not live yet; waiting for the client)
![thumbnail](/thumbnail.png)
It supports Google Docs, Google Sheets, Google Slides, PDFs and regular files.
---
### Technical overview
I chose NextJS as the backbone for this project as it offers the greatest amount of flexibility while still being very powerful on both the client and the server, with an active community and a thriving ecosystem.
For styles I chose TailwindCSS with DaisyUI for the optimizations and development speed that come from using them. Tailwind uses purgecss to minimize the final bundle, making the page load and feel faster.
The database is PostgreSQL with Prisma ORM running on Vercel's cloud infrastructure.
For authentication I chose NextAuth with JWT as it's the preferred way to handle auth in a NextJS project.
The actual implementation is a lengthy process involving many moving parts and lots of code. I'll go over the three most challenging problems in no particular order.
Interfacing with Google Drive is done to read the content there and is almost never used for writing, except for setting and removing permissions. To read the content, the reader must have appropriate permissions, and that's determined by the auth system with a JWT. For each request we can get the JWT and use it in the Google client to authenticate, unless it's an anonymous user, in which case we must use a Google service account JWT. This JWT holds a Google client access token used by Google in determining permissions. Once the client is set up, we can start making Drive requests on behalf of the user, getting their Drive content inside the web app, including folders, files, documents, pictures, shared drives and so on, which can later be rendered on a page. These requests are a bottleneck, which required many optimizations and concurrency tricks to make the site considerably faster than the competition.
The storage API uses Prisma ORM for storing and getting all the user info, including wikis and spaces. When a user logs in, they can see their wiki as well as all the wikis they are allowed to manage. It's used to handle authorized requests like changing the wiki/space name, URL, permissions and more. Storage is an integral part of any web application.

The UI/UX uses TailwindCSS and DaisyUI to make everything a fast, modern, optimized and intuitive experience, with extra features like dozens of themes as well as a custom theme builder. React was used with TypeScript to provide a nice, modern client-side experience between transitions and interactions. This setup supports maximum optimization, as you can see in the screenshots below, allowing the app to reach a Lighthouse score of 100 on all but one page (which has 99). Both mobile and desktop are supported.
---
### Google API details
Configure NextAuth for Google:
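The original snippet was elided in this view; a standard-looking NextAuth Google provider setup would resemble the sketch below. This assumes the usual `next-auth` v4 Google provider with a Drive scope; it is not the author's exact config:

```ts
import NextAuth from "next-auth";
import GoogleProvider from "next-auth/providers/google";

export default NextAuth({
  providers: [
    GoogleProvider({
      clientId: process.env.GOOGLE_CLIENT_ID as string,
      clientSecret: process.env.GOOGLE_CLIENT_SECRET as string,
      authorization: {
        params: {
          // Drive scope on top of the default profile scopes (assumed for this project)
          scope: "openid email profile https://www.googleapis.com/auth/drive.readonly",
          access_type: "offline", // ask Google for a refresh token
          prompt: "consent",
        },
      },
    }),
  ],
});
```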
Create an auth client for logged in users
```ts
let authClient = new google.auth.OAuth2(process.env.GOOGLE_CLIENT_ID, process.env.GOOGLE_CLIENT_SECRET);
authClient.setCredentials({
access_token: accessToken, // this comes from the logged in user info
refresh_token: refreshToken, // same for this
});
```

Create the drive client
```ts
const drive = google.drive({
  version: "v3",
  auth: authClient,
});
```
You can now use this client to query the API
```ts
const file = (await drive.files.get({ fileId })).data;
```
```ts
const folderContents = (await drive.files.list({ q: `'${folderId}' in parents` })).data.files;
```
```ts
const googleDocHtml = (
  await drive.files.export({
    fileId: googleDocId,
    mimeType: "text/html",
  })
).data;
```
```ts
const shortcutTarget = await drive.files.get({
  fileId,
  fields: "shortcutDetails/targetId",
});
const targetId = shortcutTarget.data.shortcutDetails?.targetId;
```
Google doesn't export everything to HTML. They provide document renderers as iFrames.
```ts
// This is used for PDFs or regular text files
<iframe src={`https://drive.google.com/file/d/${docId}/preview`}></iframe>
```


z: 1
draft: false
---
[Wells Fargo](https://www.wellsfargo.com/) is a US-based international financial institution operating in 35 countries and serving over 70 million people worldwide. [Source](https://en.wikipedia.org/wiki/Wells_Fargo)
They provide an [Open Banking API](https://en.wikipedia.org/wiki/Open_banking) for use with custom-made business credit cards like the [Watches of Switzerland credit card](https://www.watchesofswitzerland.com/wos-credit-card).
---
### Technical overview
Integrating Open Banking APIs requires many security and legal precautions. There is always a double layer of encryption for all APIs and communications (even emails).
Many of the specifications and examples are proprietary or lost in the [mountains of documentation provided by the bank](https://developer.wellsfargo.com/guides/user-guides/open-banking-europe-api-integration/obei).
For that reason I will not go into too much detail about the use cases, as I'm not sure what I'm allowed to talk about.
One use case documented on their website is the API Keys endpoint.
To generate an API key you need your client credentials, a key and a secret, in the format `Authorization: Basic base64(consumerKey:consumerSecret)`, as well as the scope in the form `grant_type=client_credentials&scope=accounts`. There are hundreds of scopes to configure. This gives you an `access_token` which is valid for 24 hours, has the scopes (permissions) you requested, and is used for most API communications.
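The token request described above is plain OAuth2 client credentials, so assembling its pieces is mechanical. The endpoint URL below is a placeholder, not the bank's real one:

```ts
// Build the parts of an OAuth2 client-credentials token request.
// The URL is a placeholder; the real endpoint is in the bank's documentation.
function buildTokenRequest(consumerKey: string, consumerSecret: string) {
  const basic = Buffer.from(`${consumerKey}:${consumerSecret}`).toString("base64");
  return {
    url: "https://api.example-bank.com/token", // placeholder endpoint
    method: "POST",
    headers: {
      Authorization: `Basic ${basic}`,
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: "grant_type=client_credentials&scope=accounts",
  };
}
```

The `access_token` in the response is then sent as a bearer credential on subsequent API calls until it expires 24 hours later.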


draft: true
Repo: [github.com](https://github.com/hearts-of-iron-2/wiki)
This project aims to revamp an old, unmaintained wiki website and bring it back to life.