format stuff again

Ivan Dimitrov 2023-11-20 20:29:43 +02:00
parent 441224af7b
commit c178ca98e7
5 changed files with 69 additions and 87 deletions

---
[LUKS](https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup) is an encryption specification for Linux used to encrypt disk partitions, and the [cryptsetup](https://man.archlinux.org/man/cryptsetup.8.en) utility is usually used for that. Once a partition is encrypted, it can be opened for reading and writing by supplying a password or a keyfile.
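For orientation, this is roughly what that lifecycle looks like with standard cryptsetup subcommands; `/dev/sdx1` and the `secret` mapper name are placeholders:

```bash
# Encrypt a partition (DESTROYS existing data), then open it under a mapped name.
cryptsetup luksFormat /dev/sdx1        # prompts for the passphrase
cryptsetup luksOpen /dev/sdx1 secret   # creates /dev/mapper/secret
mkfs.ext4 /dev/mapper/secret           # put a filesystem on the mapped device
mount /dev/mapper/secret /mnt
umount /mnt
cryptsetup luksClose secret            # lock it again
```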
### Technical details
> cryptsetup is used to conveniently setup dm-crypt managed device-mapper mappings. These include plain dm-crypt volumes and LUKS volumes. The difference is that LUKS uses a metadata header and can hence offer more features than plain dm-crypt. On the other hand, the header is visible and vulnerable to damage.
So after a partition is encrypted it has a LUKS header with some encryption metadata, followed by the body. The header tells the program (cryptsetup) how to decrypt the partition. If that header is damaged in any way, then trying to decrypt with `cryptsetup luksOpen /dev/sdx1` will print `Device /dev/sdx1 is not a valid LUKS device.`, at least if the system is up-to-date. On the server where this happened, the system was CentOS 7 with cryptsetup version 2.0.3 (as opposed to 2.6.1), so when I tried to decrypt, it didn't prompt for a password and didn't print anything. After upgrading the version following [this gitlab issue](https://gitlab.com/cryptsetup/cryptsetup/-/issues/783) I got it to print that message, so I had something to google.
> Please test with last released and supported version (currently 2.5.0), we do not have resources to debug old versions, thanks.
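With a recent cryptsetup on hand, there are quick ways to confirm whether the header still parses; both of these are standard cryptsetup subcommands:

```bash
cryptsetup luksDump /dev/sdx1   # prints version, cipher and key slots if the header is valid
cryptsetup isLuks /dev/sdx1 && echo "valid LUKS header" || echo "header missing or damaged"
```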
A good bit of googling led me to [this thread](https://bbs.archlinux.org/viewtopic.php?id=284768) on the Arch Linux forums. It describes the steps needed to diagnose most LUKS problems. One thing that was different in this case was that the command `sudo dd if=/dev/sdx1 count=20 | hexdump -C` printed only zeroes.
```bash
dd if=/dev/sdx1 count=20 | hexdump -C
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00002800
20+0 records in
20+0 records out
10240 bytes (10 kB, 10 KiB) copied, 0.00229011 s, 4.5 MB/s
```
Testing with a larger block count, `count=2050`, showed that the first 2030 or so blocks were completely wiped. This meant that the LUKS header and possibly some of the data were gone. This could still be fixed with a header backup file using `cryptsetup luksHeaderRestore <device> --header-backup-file <file>`.
Unfortunately, there was no header backup file, so the only solution was to restore a backup of the entire partition.
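For anyone reading this before disaster strikes: cryptsetup can create that header backup in one command, and restoring it is just as short. A minimal sketch, with placeholder paths:

```bash
# Back up the LUKS header while the device is still healthy.
cryptsetup luksHeaderBackup /dev/sdx1 --header-backup-file /safe/location/sdx1-luks-header.img

# Later, if the on-disk header gets damaged, write the backup over it.
cryptsetup luksHeaderRestore /dev/sdx1 --header-backup-file /safe/location/sdx1-luks-header.img
```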

---
> parcelLab is the only truly global enterprise post-purchase software provider, enabling brands to increase top-line revenue, decrease operational costs, and optimize the customer experience.
[parcelLab](https://parcellab.com/)
parcelLab takes care of post-purchase operations like order tracking, email notifications, delivery status updates, data processing and more, so that businesses don't have to.
---
### Technical overview
This integration is straightforward thanks to the [amazing documentation](https://how.parcellab.works/docs/) provided by the parcelLab team.
You really want to use the API even though there are other options for submitting data to their platform.
The data model is based on the [tracking](https://how.parcellab.works/docs/onboarding/data-model): a data object with four fields for the delivery information. An order is composed of one or more trackings.
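For illustration, submitting one tracking over HTTP might look like the sketch below. The endpoint, auth headers and exact field names are assumptions, not the documented contract (see the data-model docs linked above for the real schema), but the shape matches the one-object-per-tracking model:

```bash
# Hypothetical request: create one tracking for an order.
# Endpoint, header names and JSON fields are illustrative assumptions.
curl -X POST "https://api.parcellab.com/track/" \
  -H "Content-Type: application/json" \
  -H "user: $PARCELLAB_USER" \
  -H "token: $PARCELLAB_TOKEN" \
  -d '{
    "courier": "dhl-germany",
    "tracking_number": "00340434161094042557",
    "orderNo": "ORDER-1234",
    "zip_code": "80331",
    "destination_country_iso3": "DEU"
  }'
```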
Once data is submitted, the platform starts an automated process where it groups the new trackings into their respective orders and starts listening for events like "dispatch", "payment received" etc. to run custom actions. Each business can configure these events and actions so that they best match its operations. For example, an "order created" event could notify the customer that the order has started processing, as well as handle some other business logic in the background.
Their [order status page](https://how.parcellab.works/docs/track-and-communicate/order-status-page) is a convenient script that you can configure for your website. The script reads the URL to find an order number so that it can fetch the most up-to-date information for that order and display it in an iframe.
This system allows for a seamless, declarative, event-based integration where the business takes care of the data and events (and sales) and parcelLab takes care of the rest.
---
All this can be viewed on the tracking page embedded anywhere.

```html
<script async onload="plTrackAndTraceStart()" src="https://cdn.parcellab.com/js/v5/main.min.js"></script>
```
This shows a nice UI that can be [customized](https://how.parcellab.works/docs/track-and-communicate/order-status-page/configuration#additional-options).

---
This project aims to be a Google Drive frontend. It uses the Google APIs to fetch document data and display it in a wiki-style web page.
### [Demo page](https://ivan.stepsy.wiki/space/spc)
It supports Google Docs, Google Sheets, Google Slides, PDFs and regular files.
### Technical overview
I chose NextJS as the backbone for this project as it offers the greatest amount of flexibility while still being very powerful both on the client and on the server, with an active community and a thriving ecosystem.
For styles I chose TailwindCSS with DaisyUI for the optimizations and development speed that come from using them. Tailwind uses purgecss to minimize the final bundle, making the page load and feel faster.
The database is PostgreSQL with Prisma ORM running on Vercel's cloud infrastructure.
For authentication I chose NextAuth with JWT as it's the preferred way to handle auth in a NextJS project.
The actual implementation is a lengthy process involving many moving parts and lots of code. I'll go over the three most challenging problems in no particular order.
Interfacing with Google Drive is done to read the content there and is almost never used for writing, except for setting and removing permissions. To read the content the reader must have appropriate permissions, and that's determined by the auth system with a JWT. For each request we can get the JWT and use it in the Google client to authenticate, unless it's an anonymous user, in which case we must use a Google service account JWT. This JWT holds a Google client access token used by Google to determine permissions. Once the client is set up we can start making Drive requests on behalf of the user, getting their Drive content inside the web app, including folders, files, documents, pictures, shared drives and so on, which can later be rendered on a page. These requests are a bottleneck, which required many optimizations and concurrency tricks to make the site considerably faster than the competition.
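As a rough sketch of the kind of request involved, this hits the real Drive v3 files endpoint; `$ACCESS_TOKEN` is a placeholder for whatever token the auth layer produced:

```bash
# List files visible to the authenticated user via the Drive v3 API.
# $ACCESS_TOKEN stands in for the OAuth token carried in the session JWT
# (or one minted from the service account for anonymous visitors).
curl -s -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://www.googleapis.com/drive/v3/files?pageSize=10&fields=files(id,name,mimeType)"
```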
The storage API uses Prisma ORM for storing and getting all the user info, including wikis and spaces. When a user logs in they can see their wiki as well as all the wikis they are allowed to manage. It's used to handle authorized requests like changing the wiki/space name, URL, permissions and more. Storage is an integral part of any web application.
The UI/UX uses TailwindCSS and DaisyUI to make everything a fast, modern, optimized and intuitive experience, with extra features like dozens of themes as well as a custom theme builder. React was used with TypeScript to provide a nice, modern client-side experience across transitions and interactions. This setup supports maximum optimization, as you can see in the screenshots below, allowing the app to reach a Lighthouse score of 100 on all pages but one (it has 99). Both mobile and desktop are supported.
---

---
[Wells Fargo](https://www.wellsfargo.com/) is a US-based international financial institution operating in 35 countries and serving over 70 million people worldwide. [Source](https://en.wikipedia.org/wiki/Wells_Fargo)
They provide an [Open Banking API](https://en.wikipedia.org/wiki/Open_banking) for use with custom-made business credit cards like the [Watches of Switzerland credit card](https://www.watchesofswitzerland.com/wos-credit-card).
---
### Technical overview
Integrating Open Banking APIs requires many security and legal precautions. There is always a double layer of encryption for all APIs and communications (even emails).
Many of the specifications and examples are proprietary or lost in the [mountains of documentation provided by the bank](https://developer.wellsfargo.com/guides/user-guides/open-banking-europe-api-integration/obei). For that reason I won't go into too much detail about the use cases, as I'm not sure what I'm allowed to talk about.
One use case documented on their website is the API Keys endpoint.
To generate an API key you need your client credentials, with a key and a secret in the format `Authorization: Basic base64(consumerKey:consumerSecret)`, as well as the scope in the form `grant_type=client_credentials&scope=accounts`. There are hundreds of scopes to configure. This gives you an `access_token` which is valid for 24 hours, carries the scopes (permissions) you requested and is used for most API communications.
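Putting those two pieces together, a token request might look roughly like this; the endpoint URL is an illustrative assumption, while the header and body follow the formats described above:

```bash
# Hypothetical token request: the URL is an assumption; the Authorization
# header and body match the documented Basic-auth and scope strings above.
curl -X POST "https://api.wellsfargo.com/oauth2/v1/token" \
  -H "Authorization: Basic $(printf '%s:%s' "$CONSUMER_KEY" "$CONSUMER_SECRET" | base64 | tr -d '\n')" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials&scope=accounts"
# The response contains an access_token valid for 24 hours with the requested scopes.
```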

new.ts
```ts
import { baseDir, getAllContent } from "$lib/content";
import fs from "fs";

// CLI argument: the path (relative to the content dir) of the new entry.
const args = process.argv.slice(2);
const path = args[0];

if (!path) {
  throw new Error("Path is needed!");
}

// The last path segment doubles as the default title.
const slug = path.split("/");
const t = slug[slug.length - 1];

// Next z value: one past the highest z across all existing content.
const nextZ =
  Math.max.apply(
    Math,
    getAllContent().map(c => Number(c.data.z)),
  ) + 1;

// Frontmatter template; the goal/role lines mirror the function parameters.
const meta = (title: string = t, goal: string = "", role: string = "", date: string = "", z: number = nextZ) => `---
title: ${title}
goal: ${goal}
role: ${role}
date: ${date}
z: ${z}
draft: true
---
`;

const filePath = `${baseDir}${path}.md`;

if (fs.existsSync(filePath)) {
  throw new Error("File already exists!");
}

fs.writeFileSync(filePath, meta(), { flag: "w+" });
```
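Invoking the script is a single command; the `scripts/new.ts` location and the `tsx` runner are assumptions about the repo layout:

```bash
# Scaffold a new draft at <baseDir>/projects/my-project.md (path is illustrative).
npx tsx scripts/new.ts projects/my-project
```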