[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# ![@databricksdaily Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::1970714562920857600.png) @databricksdaily databricksdaily

databricksdaily posts on X most often about databricks, listing, make a, and number of. They currently have XX followers and XX posts still getting attention, totaling XXX engagements in the last XX hours.

### Engagements: XXX [#](/creator/twitter::1970714562920857600/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1970714562920857600/c:line/m:interactions.svg)

- X Week XXX +241%

### Mentions: XX [#](/creator/twitter::1970714562920857600/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1970714562920857600/c:line/m:posts_active.svg)

- X Week XX +533%

### Followers: XX [#](/creator/twitter::1970714562920857600/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1970714562920857600/c:line/m:followers.svg)

- X Week XX +27%

### CreatorRank: XXXXXXXXX [#](/creator/twitter::1970714562920857600/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1970714562920857600/c:line/m:influencer_rank.svg)

### Social Influence [#](/creator/twitter::1970714562920857600/influence)
---

**Social category influence**
[technology brands](/list/technology-brands) 

**Social topic influence**
[databricks](/topic/databricks) #41, [listing](/topic/listing) #1686, [make a](/topic/make-a), [number of](/topic/number-of), [azure](/topic/azure), [instead of](/topic/instead-of)
### Top Social Posts [#](/creator/twitter::1970714562920857600/posts)
---
Top posts by engagements in the last XX hours

"1/6 Why do some Databricks ingestion pipelines run X faster even when reading the same data from S3 or ADLS Heres one small config that can massively speed up your ingestion jobs 👇 #Databricks #DataEngineering #Autoloader #Spark #Performance"  
[X Link](https://x.com/databricksdaily/status/1981220118257684603) [@databricksdaily](/creator/x/databricksdaily) 2025-10-23T04:44Z XX followers, XX engagements


"4/7 So Each partition X write task X Parquet file. #spark #databricks #partition"  
[X Link](https://x.com/databricksdaily/status/1981320065904775432) [@databricksdaily](/creator/x/databricksdaily) 2025-10-23T11:21Z XX followers, XX engagements
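A minimal PySpark sketch of the claim above (the example DataFrame and output path are illustrative, not from the post): after a write, the output directory holds one part file per partition.

```python
# Illustrative sketch: output part-file count matches the partition count at write time.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-file-demo").getOrCreate()

df = spark.range(1_000_000).repartition(8)   # force 8 partitions
print(df.rdd.getNumPartitions())             # -> 8

df.write.mode("overwrite").parquet("/tmp/partition_demo")
# Listing /tmp/partition_demo shows 8 part-*.parquet files
# (one per partition / write task) plus a _SUCCESS marker.
```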


"5/7 Why doesnt Spark just combine them automatically Because that would: Break parallelism ⚡ Require shuffling data back to one node 🧳 Cause memory pressure on the driver 🧠 Make large writes painfully slow 🐢 #spark #databricks #partition"  
[X Link](https://x.com/databricksdaily/status/1981320068916269225) [@databricksdaily](/creator/x/databricksdaily) 2025-10-23T11:21Z XX followers, XX engagements
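To make the trade-off concrete, a hedged sketch (paths and data are placeholders): you can force a single output file with `coalesce(1)`, but that funnels the whole write through one task, which is exactly the cost listed above.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(1_000_000)

single = df.coalesce(1)                      # 1 partition -> 1 write task -> 1 file
single.write.mode("overwrite").parquet("/tmp/single_file_demo")
# Works, but the whole write is serialized on one task; for large data,
# keep a moderate partition count and compact files afterwards instead.
```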


"6/7 But why does this matter Because file count impacts: Performance: too many tiny files = slow reads Storage: more metadata overhead Optimization: combine files smartly for future reads #spark #databricks #partition"  
[X Link](https://x.com/databricksdaily/status/1981320071864860853) [@databricksdaily](/creator/x/databricksdaily) 2025-10-23T11:21Z XX followers, XX engagements
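Two common follow-ups, sketched under assumptions (the table name and paths are placeholders, and the post itself does not prescribe either): rewrite plain Parquet with a saner partition count, or, if the table is Delta, let Databricks' OPTIMIZE command bin-pack the small files.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Plain Parquet: read, repartition to a reasonable count, rewrite.
df = spark.read.parquet("/tmp/partition_demo")
df.repartition(4).write.mode("overwrite").parquet("/tmp/partition_demo_compacted")

# Delta Lake on Databricks: compact small files with OPTIMIZE.
spark.sql("OPTIMIZE my_catalog.my_schema.my_events")
```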


"2/6 When you use Autoloader or to bring data into Databricks Spark lists all files in the source path before reading. That file listing step can become a bottleneck especially when you have thousands of small files. #Databricks #DataEngineering #Autoloader #Spark #Performance"  
[X Link](https://x.com/databricksdaily/status/1981220121650823301) [@databricksdaily](/creator/x/databricksdaily) 2025-10-23T04:44Z XX followers, XX engagements


"4/6 This simple flag can reduce file discovery time by up to XX% for large-scale ingestion jobs #Databricks #DataEngineering #Autoloader #Spark #Performance"  
[X Link](https://x.com/databricksdaily/status/1981220128302985485) [@databricksdaily](/creator/x/databricksdaily) 2025-10-23T04:44Z XX followers, XX engagements


"5/6 Bonus tip: For even better performance combine this with: #Databricks #DataEngineering #Autoloader #Spark #Performance"  
[X Link](https://x.com/databricksdaily/status/1981220132954526088) [@databricksdaily](/creator/x/databricksdaily) 2025-10-23T04:44Z XX followers, XX engagements
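The post is cut off before naming what to combine the flag with, so the following is an assumption based on the thread's closing line ("Use notifications. Control batch size."): Auto Loader's rate-limiting options, which cap how much each micro-batch ingests. Source path and format are placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.useNotifications", "true")   # file notification mode
    .option("cloudFiles.maxFilesPerTrigger", 1000)   # cap files per micro-batch
    .option("cloudFiles.maxBytesPerTrigger", "10g")  # cap bytes per micro-batch
    .load("/mnt/raw/events")                         # placeholder source path
)
```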


"6/6 Tiny configs like this make a big difference when your pipeline scales to millions of files. Use notifications. Control batch size. Let Databricks focus on processing not scanning. #Databricks #DataEngineering #Autoloader #Spark #Performance"  
[X Link](https://x.com/databricksdaily/status/1981220135810855191) [@databricksdaily](/creator/x/databricksdaily) 2025-10-23T04:44Z XX followers, XX engagements


"1/7 Why the number of Parquet files = number of partitions in Spark (and why it matters) Lets break this down 👇 #spark #databricks #partition"  
[X Link](https://x.com/databricksdaily/status/1981320056714981728) [@databricksdaily](/creator/x/databricksdaily) 2025-10-23T11:21Z XX followers, XX engagements


"2/7 You write your DataFrame to Parquet in Spark.and boom you get N Parquet files. Thats not random. Each file = one partition. #spark #databricks #partition"  
[X Link](https://x.com/databricksdaily/status/1981320060276003039) [@databricksdaily](/creator/x/databricksdaily) 2025-10-23T11:21Z XX followers, XX engagements


"3/7 Why Spark is a distributed engine. Your DataFrame is split into partitions and each partition is processed by one task (and one executor core). When you save each task writes its own file. No merging. No central file collector. Parallel by design #spark #databricks #partition"  
[X Link](https://x.com/databricksdaily/status/1981320063044243475) [@databricksdaily](/creator/x/databricksdaily) 2025-10-23T11:21Z XX followers, XX engagements


"3/6 The fix: enable file notification mode (instead of directory listing). This lets Autoloader use event notifications (like AWS SQS or Azure Queue) to discover new files instantly no scanning #Databricks #DataEngineering #Autoloader #Spark #Performance"  
[X Link](https://x.com/databricksdaily/status/1981220124117147959) [@databricksdaily](/creator/x/databricksdaily) 2025-10-23T04:44Z XX followers, XX engagements
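A minimal Auto Loader sketch with file notification mode switched on (the bucket, checkpoint location, and target table are placeholders, and the backing queue setup is not shown).

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

raw = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .option("cloudFiles.useNotifications", "true")   # SQS / Azure Queue events instead of directory listing
    .load("s3://my-bucket/raw/")                     # placeholder source path
)

(raw.writeStream
    .option("checkpointLocation", "/mnt/checkpoints/raw_ingest")   # placeholder
    .toTable("bronze.raw_events"))                                 # placeholder target table
```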


"7/7 So you should aim for partitions sized around 128256 MB large enough for efficient reads small enough for parallelism. #spark #databricks #partition"  
[X Link](https://x.com/databricksdaily/status/1981320074301739447) [@databricksdaily](/creator/x/databricksdaily) 2025-10-23T11:21Z XX followers, XX engagements
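A back-of-the-envelope sizing sketch (all numbers and paths are illustrative): divide the input size by a target in the 128-256 MB range to pick a partition count, then repartition before writing. On the read side, `spark.sql.files.maxPartitionBytes` (default 128 MB) plays the same role.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

total_bytes = 64 * 1024**3            # e.g. ~64 GB of input (illustrative)
target_bytes = 200 * 1024**2          # aim mid-range, ~200 MB per partition

num_partitions = max(1, total_bytes // target_bytes)   # -> 327

df = spark.read.parquet("/mnt/silver/events")           # placeholder path
(df.repartition(int(num_partitions))
   .write.mode("overwrite")
   .parquet("/mnt/gold/events"))                        # placeholder path
```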
