All Posts

@livehybrid The screenshot approach is working fine, but when I implement the same thing for multiple fields I get no results. Am I missing anything in the below?
[| makeresults | eval text_search="*$text_search$*" | eval escaped=replace(text_search, "\\", "\\\\") | eval FileSource=escaped, RemoteHost=escaped, LocalPath=escaped, RemotePath=escaped | return FileSource RemoteHost LocalPath RemotePath ]
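One thing that might explain it (an assumption on my part, since I can't see your data): | return with several fields emits them as FileSource="..." RemoteHost="..." LocalPath="..." RemotePath="...", which the outer search treats as an implicit AND, so only events containing the value in all four fields at once would match. A rough sketch of an OR-based variant, building the condition as a single search string and returning it with $ (field names and quoting are placeholders, not a tested answer):
[| makeresults
| eval text_search="*$text_search$*"
| eval escaped=replace(text_search, "\\", "\\\\")
| eval search="(FileSource=\"" . escaped . "\" OR RemoteHost=\"" . escaped . "\" OR LocalPath=\"" . escaped . "\" OR RemotePath=\"" . escaped . "\")"
| return $search ]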
Assuming you have admin access, you can find the source types under the Settings menu. From there you can see which extractions are configured; I suspect these aren't handling your custom field as you expect. You could also try using the extract command:
((host="*.prod.domain.com" "Carrier updates summary;") OR (index=prod_index_eks kub.pod_name="domain-*" log="*Carrier updates summary;*")) | extract | eval message=coalesce(message, log) | table message
The logs are coming from a Django application, and the sourcetype is set to the name of the application (as shown by the | metasearch sourcetype=* command). This is how we are sending logs from the application:
logger.info('Carrier updates summary; message="The following updates message", user="john_doe", carrier_slug="example_carrier"')
We are using the below query for extraction:
((host="*.prod.domain.com" "Carrier updates summary;") OR (index=prod_index_eks kub.pod_name="domain-*" log="*Carrier updates summary;*")) | eval message=coalesce(message, log) | table message
I hope this provides some context about our logs. Apologies if it doesn't; I'm still very new to Splunk. I really appreciate your help!
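In case it helps while the sourcetype details are being checked: a minimal, hedged sketch of pulling the full quoted message out of the raw event with rex instead of relying on the automatic extraction (the capture pattern assumes the event really contains message="..." with double quotes, as in the logger.info sample above, and the field name full_message is made up for illustration):
((host="*.prod.domain.com" "Carrier updates summary;") OR (index=prod_index_eks kub.pod_name="domain-*" log="*Carrier updates summary;*"))
| rex field=_raw "message=\"(?<full_message>[^\"]*)\""
| table full_message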
This worked well, @ITWhisperer. Thanks for the quick turnaround!
In earlier versions of Splunk I remember there used to be an option to disable an active user, which would then show a status of inactive/user disabled. Now I can't see any option to disable a user; only the delete option is there. Does anyone know how to disable a user now, or, if this capability has been removed from Splunk, what the alternative is?
| spath violation_stats output=violation_stats | where isnotnull(violation_stats)
I have a few records in Splunk like this:
{"timeStamp":"2025-04-21T08:21:40.000Z","eventId":"test_eventId_1","orignId":"test_originId_1","tenantId":"test_tenantId","violation_stats":{"Key1":11,"Key2":23,"Key3":1,"Key4":1,"Key5":1},"lastModifier":"test_admin","rawEventType":"test_event"}
{"timeStamp":"2025-04-21T08:21:40.000Z","eventId":"test_eventId_2","orignId":"test_originId_2","tenantId":"test_tenantId","violation_stats":{"Key1":1,"Key10":1},"lastModifier":"test_admin","rawEventType":"test_event"}
{"timeStamp":"2025-04-21T08:21:40.000Z","eventId":"test_eventId_3","orignId":"test_originId_3","tenantId":"test_tenantId","violation_stats":{"Key6":1,"Key7":2,"Key8":1,"Key9":4},"lastModifier":"test_admin","rawEventType":"test_event"}
{"timeStamp":"2025-04-21T08:21:40.000Z","eventId":"test_eventId_4","orignId":"test_originId_4","tenantId":"test_tenantId","lastModifier":"test_admin","rawEventType":"test_event"}
Now I need to check how many records contain the violation_stats field and how many do not. I tried the query below, but it didn't work:
index="my_index" | search violation_stats{}=*
I checked online and learned that I might need to use spath. However, since the keys inside the JSON are not static, I am not sure how to use spath to get my result.
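A minimal sketch of one way to count both groups, assuming the events are valid JSON exactly like the samples above (the index name is taken from your own query):
index="my_index"
| spath path=violation_stats output=violation_stats
| eval has_violation_stats=if(isnotnull(violation_stats), "present", "missing")
| stats count by has_violation_stats
Because spath is given an explicit path here, the dynamic key names inside violation_stats don't matter; the field is only tested for presence.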
What sourcetype and extraction configuration are you using?
Is this "pulling the file from the FTP server into my local Splunk server" using ftp? If so, try pulling the file from the FTP server into my local Splunk server into a different directory, before c... See more...
Is this "pulling the file from the FTP server into my local Splunk server" using ftp? If so, try pulling the file from the FTP server into my local Splunk server into a different directory, before copying it on the splunk server to the monitored directory.
Hi, we are using the event field message in our alert, but in some cases the field is not being parsed correctly. For example, in the attached screenshot the source event contains the full text in raw format, i.e. message="The full message". However, when we check the Event under the Action tab, it only shows the first word of the message ("The"), which results in incorrect information being sent in alerts. Could someone please help us resolve this issue? I appreciate any help you can provide.
Is there any solution to what I'm facing? Here's what I've tested so far.
1: WinSCP uploads file.json to the FTP server → Splunk local server retrieves the file to a local directory → Splunk reads and indexes the data.
sha256sum /splunk_local/file.json
45b01fabce6f2a75742c192143055d33e5aa28be3d2c3ad324dd2e0af5adf8dd
2: Deleted file.json from the FTP server → used WinSCP to re-upload the same file.json → Splunk local server pulled the file to the local directory → Splunk did not index file.json.
sha256sum /splunk_local/file.json
45b01fabce6f2a75742c192143055d33e5aa28be3d2c3ad324dd2e0af5adf8dd
3: WinSCP overwrote file.json on the FTP server with a version containing both new and existing entries → Splunk local server pulled the updated file to the local directory → Splunk re-read and re-indexed the entire file, including previously indexed data.
sha256sum /splunk_local/file.json
2217ee097b7d77ed4b2eabc695b89e5f30d4e8b85c8cbd261613ce65cda0b851
I noticed that the SHA value only changes when a new entry is added to the file, as seen in scenario 3. However, in scenarios 1 and 2 the SHA value remains the same, even if I delete and re-upload the exact same file to the FTP server and pull it into my local Splunk server. And yes, I'm pulling the file from the FTP server into my local Splunk server, where the file is being monitored.
@ITWhisperer  Data flushing is enabled for the required tables.
Could it be that the file you are monitoring on the database server has not been closed / flushed, so the forwarder is unaware of any updates until later?
This is probably because your FTP server is deleting the existing file when you overwrite it, so the forwarder sees it as a new file even if it has the same name and content. Try copying the received file on the FTP server to the monitored directory.
Salting the CRC is very, very rarely the way to go. Usually it's about the initCrcLength setting: if your files contain a long "header" which is constant between files, you need to raise its value. But problems with CRC duplication manifest as the opposite of what you're getting - data _not_ being indexed at all because Splunk considers two files the same, not data being indexed multiple times.
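For reference, a hedged inputs.conf sketch of what raising initCrcLength looks like (the path, index, sourcetype, and the value 1024 are placeholders, not a recommendation for this specific case):
# inputs.conf - monitor a single file; raise initCrcLength only if the first
# initCrcLength bytes (256 by default) are identical across genuinely different files
[monitor:///home/ws/logs/file.json]
index = your_index
sourcetype = your_sourcetype
initCrcLength = 1024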
Hi, I'm facing an issue where the same data gets indexed multiple times every time the JSON file is pulled from the FTP server. Each time the JSON file is retrieved and placed on my local Splunk server, it overwrites the existing file. I don't have control over the content being placed on the FTP server; it could be either an entirely new entry or an existing entry with new data added, as shown below. I'm monitoring a specific file, as its name, type, and path remain consistent. From what I can observe, every time the file has new entries alongside previously indexed data, it is re-indexed, causing duplication.
Example:
file.json
2024-04-21 14:00 - row 1
2024-04-21 14:10 - row 2
overwritten file.json
2024-04-21 14:00 - row 1
2024-04-21 14:10 - row 2
2024-04-21 14:20 - row 3
Additionally, I checked the sha256sum of the JSON file after it's pulled into my local Splunk server. The hash value changes before and after the file is overwritten.
file.json: 2217ee097b7d77ed4b2eabc695b89e5f30d4e8b85c8cbd261613ce65cda0b851 /home/ws/logs/###.json
overwritten file.json: 45b01fabce6f2a75742c192143055d33e5aa28be3d2c3ad324dd2e0af5adf8dd /home/ws/logs/###.json
I've tried using initCrcLength, crcSalt, and followTail, but they don't seem to prevent the duplication; Splunk still indexes it as new data. Any assistance would be appreciated, as I can't seem to prevent the duplicate indexing.
Hello Splunkers!!
Issue Description
We are experiencing a significant delay in data ingestion (>10 hours) for one index in Project B within our Splunk environment. Interestingly, Project A, which operates with a nearly identical configuration, does not exhibit this issue, and data ingestion occurs as expected.
Steps Taken to Diagnose the Issue
To identify the root cause of the delayed ingestion in Project B, the following checks were performed:
Timezone Consistency: Verified that the timezone settings on the database server (source of the data) and the Splunk server are identical, ruling out timestamp misalignment.
Props Configuration: Confirmed that the props.conf settings align with the event patterns, ensuring proper event parsing and processing.
System Performance: Monitored CPU performance on the Splunk server and found no resource bottlenecks or excessive load.
Note: Configuration Comparison: Conducted a thorough comparison of configurations between Project A and Project B, including inputs, outputs, and indexing settings, and found no apparent differences.
Observations
The issue is isolated to Project B, despite both projects sharing similar configurations and infrastructure. Project A processes data without delays, indicating that the Splunk environment and database connectivity are generally functional.
Screenshot 1 and Screenshot 2: (attached screenshots, not reproduced here)
Event sample:
TIMESTAMP="2025-04-17T21:17:05.868000Z",SOURCE="TransportControllerManager_x.onStatusChangedTransferRequest",IDEVENT="1312670",EVENTTYPEKEY="TRFREQ_CANCELLED",INSTANCEID="210002100",OBJECTTYPE="TRANSFERREQUEST",OPERATOR="1",OPERATORID="1",TASKID="10030391534",TSULABEL="309360376000158328"
props.conf
[wmc_events]
CHARSET = AUTO
KV_MODE = AUTO
SHOULD_LINEMERGE = false
description = WMC events received from the Oracle database, formatted as key-value pairs
pulldown_type = true
TIME_PREFIX = ^TIMESTAMP=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
TZ = UTC
NO_BINARY_CHECK = true
TRUNCATE = 10000000
#MAX_EVENTS = 100000
ANNOTATE_PUNCT = false
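To narrow down where the >10 hour delay is introduced, one hedged sketch is to compare event time with index time (the index name is a placeholder, and sourcetype=wmc_events is assumed from the props.conf stanza above):
index=your_projectB_index sourcetype=wmc_events
| eval lag_seconds = _indextime - _time
| stats min(lag_seconds) max(lag_seconds) avg(lag_seconds) by host
A large, roughly constant lag usually points at the data arriving late at the input, while a lag that matches a timezone offset tends to point back at timestamp parsing.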
Hi @livehybrid,
Previously, I was monitoring a folder path. However, after making some adjustments, I've switched to monitoring a specific file instead, since its name, type, and path will always remain consistent. Now I'm encountering an issue where the same data gets indexed multiple times whenever the JSON file is pulled from the FTP server. Each time the JSON file is retrieved and placed on my local Splunk server, it overwrites the existing file. I've tried using initCrcLength and crcSalt, but they don't seem to prevent the duplication, as Splunk still indexes it as new data.
Additionally, I checked the sha256sum of the JSON file after it's pulled into my local Splunk server. The hash value changes before and after the new data overwrites the file. I'm not entirely sure how Splunk determines the file's initial 256-byte hash for comparison.
1: 2217ee097b7d77ed4b2eabc695b89e5f30d4e8b85c8cbd261613ce65cda0b851 /home/ws/logs/cpf_case_final.json
2: 45b01fabce6f2a75742c192143055d33e5aa28be3d2c3ad324dd2e0af5adf8dd /home/ws/logs/cpf_case_final.json
Thank you @livehybrid, I edited the splunk_metadata.csv, removed the existing entries, added the below key,index,value lines, and restarted SC4S:
fortinet_fortios_traffic, index, index_new
fortinet_fortios_utm, index, index_new
That did not work either. Am I still missing something here? Also, is there a way to change all of the default indexes (netfw, netops, oswin, osnix, and so on) to a single new index?
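For comparison, a hedged sketch of how I understand the splunk_metadata.csv lines should look: three comma-separated columns (key, metadata name, value) with no extra spaces; whether the spaces in your entries matter is an assumption on my part, and index_new is just a placeholder:
fortinet_fortios_traffic,index,index_new
fortinet_fortios_utm,index,index_new
I'm not aware of a single key that redirects every default index (netfw, netops, oswin, osnix, ...) at once, so per-source overrides like the above may be needed for each one.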