All Posts

Yes, I'm accessing my FTP server using the FTP method. However, this shouldn't make a difference whether I'm using FTP or SFTP, right? I'm still encountering the same issue, even after copying the file to a different folder before moving it to the monitored directory on the Splunk server. Just to add on, my file type is JSON.

[Mon Apr 21 20:28:01 +08 2025] Attempting FTP to 192.168.80.139
Connected to 192.168.80.139 (192.168.80.139).
220 (vsFTPd 3.0.3)
331 Please specify the password.
230 Login successful.
250 Directory successfully changed.
Local directory now /home/ws/pull
221 Goodbye.
'/home/ws/pull/###_case_final.json' -> '/home/ws/logs/###_case_final.json'

[Mon Apr 21 20:28:12 +08 2025] Attempting FTP to 192.168.80.139
Connected to 192.168.80.139 (192.168.80.139).
220 (vsFTPd 3.0.3)
331 Please specify the password.
230 Login successful.
250 Directory successfully changed.
Local directory now /home/ws/pull
local: ###_case_final.json remote: ###_case_final.json
227 Entering Passive Mode (192,168,80,139,249,175).
150 Opening BINARY mode data connection for ###_case_final.json (1455 bytes).
226 Transfer complete.
1455 bytes received in 8.5e-05 secs (17117.65 Kbytes/sec)
221 Goodbye.
'/home/ws/pull/###_case_final.json' -> '/home/ws/logs/###_case_final.json'

As of now, my inputs.conf contains the following only.
Hi All,

I have 4 heavy forwarder servers sending data through 5 indexers:
- server1 acts as a syslog server, with autoLBFrequency = 10 and maxQueueSize = 1000MB
- server2 acts as a syslog server and heavy forwarder, with autoLBFrequency = 10 and maxQueueSize = 500MB
- server3 acts as a heavy forwarder, with autoLBFrequency = 10 and maxQueueSize = 500MB
- server4 acts as a heavy forwarder, with autoLBFrequency = 10 and maxQueueSize = 500MB

We are receiving blocked=true in metrics.log while the syslog/heavy forwarders try to send data through the indexer servers. Because of this, ingestion is delayed and data arrives in Splunk 2-3 hours late.

Also, one of the 5 indexer servers is consistently at 99-100% CPU utilization. It has 24 CPUs, and the other indexer servers also run with 24 CPUs. We are planning to upgrade just that highly utilized indexer from 24 to 32 CPUs.

Kindly suggest: will updating outputs.conf as below reduce/stop the "blocked=true" in metrics.log and bring the indexer CPU load back to normal before upgrading the CPU? Or do we need to do both, the outputs.conf changes and the CPU upgrade? If both can be done, which should we try first? Kindly help.

autoLBFrequency = 5
maxQueueSize = 1000MB
aggQueueSize = 7000
outputQueueSize = 7000
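For reference, the first two of those settings belong in the tcpout group stanza in outputs.conf; a minimal sketch (the group name and server list are placeholders for your own):

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# rotate the load-balanced target more frequently
autoLBFrequency = 5
# allow a larger in-memory output queue before blocking
maxQueueSize = 1000MB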
Thanks for your response. I’ve already attempted this, but it didn’t work as expected.
Splunk (the monitor input, to be precise) doesn't care about the checksum of the whole file. It is obvious that the hash of the whole file will change as soon as _anything_ changes within the file. Whether it is a complete rewrite of the whole file contents or just adding a single byte at the end - the hash will change.

The monitor input stores some values regarding the state of the file. It stores the initCrc value, which will obviously change if the file is overwritten (the length of the segment it is computed over can be adjusted in settings). But it also stores the seekCrc, which is a checksum of the last 256 bytes read (along with the position of those 256 bytes within the file).

I suppose in your case the file ends by closing the JSON array, but on each subsequent "append" the array itself is extended: its closing bracket is removed, another JSON structure is added, and the array is closed again in a new place.

Unfortunately, you can't do much about it. As I said before - you'd be best off scripting some external solution to read that array and dump its contents in a sane manner to another file for reading.
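To illustrate that, a minimal Python sketch of such an external step (the paths and the line-level dedup are assumptions, not a drop-in solution); it flattens the array into newline-delimited JSON, which a monitor input treats as a simple growing log:

import json

SRC = "/home/ws/logs/case_final.json"    # hypothetical: the array file pulled from FTP
DST = "/home/ws/logs/case_final.ndjson"  # hypothetical: the file Splunk actually monitors

# Load the full JSON array from the pulled file.
with open(SRC) as f:
    records = json.load(f)

# Remember what was already written, so re-running after an
# "append" to the array only emits the new elements.
try:
    with open(DST) as f:
        seen = set(line.rstrip("\n") for line in f)
except FileNotFoundError:
    seen = set()

# Append each unseen record as one JSON object per line (NDJSON);
# to the monitor input this looks like a normal growing log file.
with open(DST, "a") as out:
    for rec in records:
        line = json.dumps(rec, sort_keys=True)
        if line not in seen:
            out.write(line + "\n")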
Hi All, I am looking for help onboarding Citrix VDI logs and Citrix WAF logs into Splunk. A Splunk add-on is not available; we have also confirmed this with Splunk Support. Can anyone help and guide on the best practice for onboarding Citrix VDI and WAF logs? Much appreciated if you have solutions.
@Mridu27  You can either remove all roles associated with the user or simply delete the user altogether. There is no way to disable accounts, unfortunately. Some suggestions:
- Take away all their roles, including user.
- Change the passwords on the accounts (you will need to give them new passwords when you are done).
- You could edit web.conf and use the acceptFrom parameter to limit logins only to specific IPs or a subnet.

acceptFrom = <network_acl> ...
* Lists a set of networks or addresses from which to accept connections.
* Separate multiple rules with commas or spaces.
* Each rule can be in one of the following formats:
  1. A single IPv4 or IPv6 address (examples: "10.1.2.3", "fe80::4a3")
  2. A Classless Inter-Domain Routing (CIDR) block of addresses (examples: "10/8", "192.168.1/24", "fe80:1234/32")
  3. A DNS name, possibly with a "*" used as a wildcard (examples: "myhost.example.com", "*.splunk.com")
  4. "*", which matches anything
* You can also prefix an entry with '!' to cause the rule to reject the connection. The input applies rules in order, and uses the first one that matches. For example, "!10.1/16, *" allows connections from everywhere except the 10.1.*.* network.
* Default: "*" (accept from anywhere)
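For instance, a minimal web.conf sketch (the subnet is a placeholder for your own admin network):

[settings]
# accept Splunk Web logins only from the 10.1.0.0/16 network
acceptFrom = 10.1/16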
@livehybrid  The screenshot version is working fine, but if I implement the same for multiple fields I am not getting any results. Am I missing anything in the below?

[| makeresults
| eval text_search="*$text_search$*"
| eval escaped=replace(text_search, "\\", "\\\\")
| eval FileSource=escaped, RemoteHost=escaped, LocalPath=escaped, RemotePath=escaped
| return FileSource RemoteHost LocalPath RemotePath ]
Assuming you have admin access, you can find the source types under the settings menu option. From this you can find out what extractions are configured, as I suspect these aren't dealing with your custom field as you expect. You could also try using the extract command:

((host="*.prod.domain.com" "Carrier updates summary;") OR (index=prod_index_eks kub.pod_name="domain-*" log="*Carrier updates summary;*"))
| extract
| eval message=coalesce(message, log)
| table message
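For the specific format you posted, a direct search-time extraction might also work (a sketch; it assumes the message value never contains embedded double quotes):

((host="*.prod.domain.com" "Carrier updates summary;") OR (index=prod_index_eks kub.pod_name="domain-*" log="*Carrier updates summary;*"))
| rex field=_raw "message=\"(?<message>[^\"]*)\""
| table message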
The logs are coming from a Django application, and the sourcetype is set to the name of the application (as shown by the | metasearch sourcetype=* command). This is how we are sending logs from the application:

logger.info('Carrier updates summary; message="The following updates message", user="john_doe", carrier_slug="example_carrier"')

We are using the below query for extraction:

((host="*.prod.domain.com" "Carrier updates summary;") OR (index=prod_index_eks kub.pod_name="domain-*" log="*Carrier updates summary;*"))
| eval message=coalesce(message, log)
| table message

I hope this provides some context about our logs. Apologies if it doesn’t — I’m still very new to Splunk. I really appreciate your help!
This worked well, @ITWhisperer. Thanks for the quick turnaround.
In earlier versions of Splunk, I remember there used to be an option to disable an active user, which would then show a status of inactive/user disabled. Now I can't see any option to disable a user; only a delete option is there. Does anyone have any idea how to disable a user now, or, if this capability has been removed from Splunk, what the alternative is?
| spath path=violation_stats output=violation_stats
| where isnotnull(violation_stats)
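To also get the counts both ways in one search, a small extension of the same idea (a sketch; the index name is taken from your example):

index="my_index"
| spath path=violation_stats output=violation_stats
| eval has_violation_stats=if(isnotnull(violation_stats), "yes", "no")
| stats count by has_violation_stats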
I have a few records in Splunk like this:

{"timeStamp":"2025-04-21T08:21:40.000Z","eventId":"test_eventId_1","orignId":"test_originId_1","tenantId":"test_tenantId","violation_stats":{"Key1":11,"Key2":23,"Key3":1,"Key4":1,"Key5":1},"lastModifier":"test_admin","rawEventType":"test_event"}
{"timeStamp":"2025-04-21T08:21:40.000Z","eventId":"test_eventId_2","orignId":"test_originId_2","tenantId":"test_tenantId","violation_stats":{"Key1":1,"Key10":1},"lastModifier":"test_admin","rawEventType":"test_event"}
{"timeStamp":"2025-04-21T08:21:40.000Z","eventId":"test_eventId_3","orignId":"test_originId_3","tenantId":"test_tenantId","violation_stats":{"Key6":1,"Key7":2,"Key8":1,"Key9":4},"lastModifier":"test_admin","rawEventType":"test_event"}
{"timeStamp":"2025-04-21T08:21:40.000Z","eventId":"test_eventId_4","orignId":"test_originId_4","tenantId":"test_tenantId","lastModifier":"test_admin","rawEventType":"test_event"}

Now, I need to check how many records contain the violation_stats field and how many do not. I tried the query below, but it didn't work:

index="my_index" | search violation_stats{}=*

I checked online and learned that I might need to use spath. However, since the keys inside the JSON are not static, I am not sure how I can use spath to get my result.
What sourcetype and extraction configuration are you using?
Is this "pulling the file from the FTP server into my local Splunk server" using ftp? If so, try pulling the file from the FTP server into my local Splunk server into a different directory, before c... See more...
Is this "pulling the file from the FTP server into my local Splunk server" using ftp? If so, try pulling the file from the FTP server into my local Splunk server into a different directory, before copying it on the splunk server to the monitored directory.
Hi, we are using the event field message in our alert, but in some cases the field is not being parsed correctly. For example, in the attached screenshot, the source event contains the full text in raw format, i.e., message="The full message". However, when we check the Event under the Action tab, it only shows the first word of the message, "The", which results in incorrect information being sent in alerts. Could someone please help us resolve this issue? I appreciate any help you can provide.
Is there any solution to what I'm facing? Here's what I’ve tested so far.

1: WinSCP uploads file.json to the FTP server → Splunk local server retrieves the file to a local directory → Splunk reads and indexes the data.
sha256sum /splunk_local/file.json
45b01fabce6f2a75742c192143055d33e5aa28be3d2c3ad324dd2e0af5adf8dd

2: Deleted file.json from the FTP server → Used WinSCP to re-upload the same file.json → Splunk local server pulled the file to the local directory → Splunk did not index file.json.
sha256sum /splunk_local/file.json
45b01fabce6f2a75742c192143055d33e5aa28be3d2c3ad324dd2e0af5adf8dd

3: WinSCP overwrote file.json on the FTP server with a version containing both new and existing entries → Splunk local server pulled the updated file to the local directory → Splunk re-read and re-indexed the entire file, including previously indexed data.
sha256sum /splunk_local/file.json
2217ee097b7d77ed4b2eabc695b89e5f30d4e8b85c8cbd261613ce65cda0b851

I noticed that the SHA value only changes when a new entry is added to the file, as seen in scenario 3. However, in scenarios 1 and 2, the SHA value remains the same, even if I delete and re-upload the exact same file to the FTP server and pull it into my local Splunk server. And yes, I'm pulling the file from the FTP server into my local Splunk server, where the file is being monitored.
@ITWhisperer  Data flushing is enabled for the required tables.
Could it be that the file you are monitoring on the database server has not been closed/flushed, so the forwarder is unaware of any updates until later?