All Posts


Hi @tech_g706
The official docs are at https://docs.typo3.org/m/typo3/reference-coreapi/main/en-us/ApiOverview/Logging/Index.html if they're any use to you. Let me know if there's anything else I can help with.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @capjacksparo
Please can you confirm where the splunk_metadata.csv file that you updated is located? I'm not sure it's possible to overwrite the defaults other than by using the splunk_metadata.csv file.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @ganesanvc
Were you able to try the below?
@livehybrid wrote:
Hi @ganesanvc
Does "text_search" come from a search result, or is it something like a token you are passing in? I couldn't tell from the request, but if it's coming from a token and you want to apply the additional escaping then you can do this:

index=main source="answersDemo"
    [| makeresults
     | eval text_search="*\\Test\abc\test\abc\xxx\OUT\*"
     | eval FileSource=replace(text_search, "\\\\", "\\\\\\\\")
     | return FileSource ]

Note: I used a sample event in index=main, as you can see in the results above, created using:

| windbag
| head 1
| eval _raw="Test Event for SplunkAnswers user=Demo FileSource=\"MyFileSystem\\Test\\abc\\test\\abc\\xxx\\OUT\\test.exe\" fileType=exe"
| eval source="answersDemo"
| collect index=main output_format=hec

I may have got the wrong end of the stick with what you're looking for here, but let me know!
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @uagraw01
It's very suspicious that the offset looks to be roughly -36000 seconds, which is pretty much *exactly* 10 hours. Could there be an issue with timezones here? It doesn't sound like the data is blocked for exactly 10 hours in the data ingestion pipeline; it feels more likely that a previous server in the ingestion journey has an incorrect timezone.
The timezone set in props.conf for the given sourcetype (TZ=) is applied relative to the timezone provided by the forwarder, so it's worth checking the forwarder in Project B, assuming this is different from the one in Project A.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
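As an illustration of the TZ= suggestion above, a hedged props.conf sketch (the sourcetype name and the timezone are assumptions on my part; Australia/Brisbane is just one example of a UTC+10 zone with no DST):

# props.conf on the indexer or heavy forwarder that parses this sourcetype
[your:sourcetype]
TZ = Australia/Brisbane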
Hi @ws
If you are using a script to do this, it might be worth changing the process a little: instead of downloading the file and overwriting the existing file, try downloading it as a temp file and then writing its contents to the existing file. This should prevent Splunk from thinking it is a new file.
There's an interesting thread here https://community.splunk.com/t5/Getting-Data-In/Duplicate-indexing-of-data/m-p/376619 which might help you.
Another thing you could do is change the logging to DEBUG for the following components, then see what Splunk logs the next time you update the file (see the CLI sketch after this post):
- TailingProcessor
- BatchReader
- WatchedFile
- FileTracker
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
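A rough sketch of raising those log levels via the CLI (assuming the component names above match the logging channels in your Splunk version; the change applies at runtime and should be reverted afterwards):

$SPLUNK_HOME/bin/splunk set log-level TailingProcessor -level DEBUG
$SPLUNK_HOME/bin/splunk set log-level BatchReader -level DEBUG
$SPLUNK_HOME/bin/splunk set log-level WatchedFile -level DEBUG
$SPLUNK_HOME/bin/splunk set log-level FileTracker -level DEBUG
# set these back to INFO once you've captured the next file update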
Hi @Mridu27
Unfortunately there isn't a capability to disable a user in Splunk. There is an Idea raised for this which you might like to upvote, though: https://ideas.splunk.com/ideas/PLECID-I-682
There are a few options to prevent users accessing Splunk, some mentioned in other answers such as the one @kiran_panchavat suggested (https://community.splunk.com/t5/Security/Disable-user-account-temporary/td-p/396592). However, in the currently supported versions it isn't possible to remove all roles from a user, and I wouldn't recommend editing web.conf to limit by IP: if you are disabling a user for security concerns then they may still be able to access via other IPs, and you also risk locking out valid users.
Ultimately the best solution may boil down to your specific environment, e.g. on-prem/Splunk Cloud, local users, LDAP, or SSO/SAML. What are you using for authentication?
If you are using local Splunk accounts then I would recommend creating a blank role with no capabilities and no roles inherited - this means that if the user attempted to log in they couldn't interact with Splunk; they couldn't run a search, for example. Then assign only that role to the user (see the sketch below).
However, if you are using SAML/SSO then it is the SAML provider that sends the groups the user belongs to. In that scenario you should disable the user or remove the groups at the Identity Provider, as changes made in Splunk will be overridden when they log in.
Quick side note: you may see an "Active" status next to users in the Splunk user list. While there isn't a capability to disable users, a user can be in a "locked out" state if they fail to log in too many times.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
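For the local-accounts case, a minimal authorize.conf sketch of such a blank role (the role name is made up; adjust to your environment):

# authorize.conf - hypothetical "no access" role with no capabilities and no inherited roles
[role_no_access]
importRoles =
srchIndexesAllowed =
srchIndexesDefault =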
Hi @Sankar
Please can you check whether https://splunkbase.splunk.com/app/6280 (Citrix Analytics Add-on for Splunk) solves any of your requirements? I am not too familiar with Citrix, but it does specifically mention VDI.
Aside from that, you would need to check the output capabilities of the Citrix tooling; I suspect it will be syslog unless it has a dedicated Splunk output. If you end up going down the syslog route then you should look into using Splunk Connect for Syslog (SC4S), or using something like rsyslog to capture the syslog feed onto the filesystem and then forward it into Splunk with a UF (see the sketch below).
I would recommend sending the data to a development environment first, and configuring the relevant props/transforms to meet your requirements before sending to your production environment.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
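If you do go the rsyslog + UF route, a very rough sketch of the shape of it (the source IP, paths, index, and sourcetype below are all assumptions, not anything Citrix-specific):

# /etc/rsyslog.d/citrix.conf - write the Citrix feed to disk
if ($fromhost-ip == '10.0.0.50') then {
    action(type="omfile" file="/var/log/citrix/citrix.log")
    stop
}

# inputs.conf on the UF monitoring that file
[monitor:///var/log/citrix/citrix.log]
sourcetype = citrix:syslog
index = citrix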
My original Python script accessed the FTP server directly and used the mget command to retrieve files from the monitored folder. But as you suggested, I now pull the files from the FTP server into a different directory on my local Splunk server first, before copying them to the monitored directory on the Splunk server. I made a slight change to the script so that the cp only runs after it exits from the FTP session.

ftp -inv "$HOST" <<EOF >> /home/ws/fetch_debug.log 2>&1
user $USER $PASS
cd $REMOTE_DIR
lcd /home/ws/pull
mget *
bye
EOF

cp -v /home/ws/pull/*.json /home/ws/logs >> /home/ws/fetch_debug.log 2>&1
Hi @sureshkumaar
Further to my last reply, there are also a couple of worthwhile resources here which give an overview of how to identify and deal with blocked queues:
https://docs.splunk.com/Documentation/Splunk/8.2.4/Deploy/Datapipeline
How to Troubleshoot Blocked Ingestion Pipeline Queues with Indexers and Forwarders - https://conf.splunk.com/files/2019/slides/FN1570.pdf
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi
Increasing autoLBFrequency, maxQueueSize, aggQueueSize, or outputQueueSize in outputs.conf on your heavy forwarders may help temporarily reduce "blocked=true" messages, but these settings do not address the root cause: your indexer(s) are overloaded and unable to keep up with incoming data.
The following will tell you which queues are blocking on which servers:

index=_internal source=*metrics.log blocked=true
| stats count by host, group, name

"blocked=true" in metrics.log means the forwarder cannot send data to the indexer because the indexer is not accepting it fast enough (usually due to CPU, disk, or queue saturation). Increasing forwarder queue sizes only buffers more data; it does not fix indexer bottlenecks.
The indexer at 99-100% CPU is a clear bottleneck. Upgrading its CPU may help, but if the load is not balanced across all indexers, you need to investigate why (e.g. uneven load balancing, hot buckets, or misconfiguration). Lowering autoLBFrequency (e.g. from 10 to 5) can help distribute load more evenly, but it will not solve indexer resource exhaustion.
Do not rely solely on queue size increases; this can delay, but not prevent, data loss if indexers remain overloaded. Investigate why one indexer is overloaded (check for hot buckets, network issues, or misconfigured load balancing).
Understanding *why* the single indexer is blocking is probably the important thing here. It could be a number of things, but it is likely to be either a resource issue (e.g. a faulty disk) or one of your syslog feeds failing to balance to another indexer. Is it always the same indexer that runs hot, or does it change?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
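To go a step further than the blocked=true events, a sketch like this against the queue metrics can show which pipeline queue on which indexer is running hottest (these are the standard metrics.log queue fields, but treat it as a starting point rather than a definitive health check):

index=_internal source=*metrics.log group=queue
| eval fill_pct=round(current_size_kb/max_size_kb*100,1)
| stats max(fill_pct) AS peak_fill_pct by host, name
| sort - peak_fill_pct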
So, are you using (s)ftp to copy from one directory to the final directory or using the cp command (on the server where the monitored directory is)?
Yes, I'm accessing my FTP server using the FTP method. However, this shouldn't make a difference whether I'm using FTP or SFTP, right? I'm still encountering the same issue, even after copying the file to a different folder before moving it to the monitored directory on the Splunk server. Just to add on, my file type is JSON.

[Mon Apr 21 20:28:01 +08 2025] Attempting FTP to 192.168.80.139
Connected to 192.168.80.139 (192.168.80.139).
220 (vsFTPd 3.0.3)
331 Please specify the password.
230 Login successful.
250 Directory successfully changed.
Local directory now /home/ws/pull
221 Goodbye.
'/home/ws/pull/###_case_final.json' -> '/home/ws/logs/###_case_final.json'
[Mon Apr 21 20:28:12 +08 2025] Attempting FTP to 192.168.80.139
Connected to 192.168.80.139 (192.168.80.139).
220 (vsFTPd 3.0.3)
331 Please specify the password.
230 Login successful.
250 Directory successfully changed.
Local directory now /home/ws/pull
local: ###_case_final.json remote: ###_case_final.json
227 Entering Passive Mode (192,168,80,139,249,175).
150 Opening BINARY mode data connection for ###_case_final.json (1455 bytes).
226 Transfer complete.
1455 bytes received in 8.5e-05 secs (17117.65 Kbytes/sec)
221 Goodbye.
'/home/ws/pull/###_case_final.json' -> '/home/ws/logs/###_case_final.json'

As of now, my inputs.conf contains the following only.
Hi All,
I have 4 heavy forwarder servers sending data to 5 indexers:
- server1 acts as a syslog server and has autoLBFrequency as 10 and maxQueueSize as 1000MB
- server2 acts as a syslog and heavy forwarder and has autoLBFrequency as 10 and maxQueueSize as 500MB
- server3 acts as a heavy forwarder and has autoLBFrequency as 10 and maxQueueSize as 500MB
- server4 acts as a heavy forwarder and has autoLBFrequency as 10 and maxQueueSize as 500MB
We are receiving blocked=true in metrics.log while the syslog/heavy forwarders try to send data to the indexer servers. Due to this, ingestion is getting delayed and data is arriving in Splunk 2-3 hours late.
On one of the 5 indexer servers, CPU is consistently at 99-100% utilisation; it has 24 CPUs, and the other indexer servers are also running with 24 CPUs. We are planning to upgrade the highly utilised indexer server alone from 24 to 32 CPUs.
Kindly suggest whether updating the below in outputs.conf will reduce/stop the "blocked=true" messages in metrics.log and bring the CPU load on the indexer back to normal before upgrading the CPU, or whether we need to do both (change outputs.conf and upgrade the CPU). If both are needed, which should we try first? Kindly help.

autoLBFrequency = 5
maxQueueSize = 1000MB
aggQueueSize = 7000
outputQueueSize = 7000
Thanks for your response. I’ve already attempted this, but it didn’t work as expected.
Splunk (the monitor input, to be precise) doesn't care about the checksum of the whole file. Obviously the hash of the whole file will change as soon as _anything_ changes within the file, whether it is a complete rewrite of the whole file contents or just adding a single byte at the end.
The monitor input stores some values regarding the state of the file. It stores the initCrc value, which will obviously change if the file is overwritten (and the length of which can be adjusted in settings). But it also stores the seekCrc, which is a checksum of the last-read 256 bytes (and the position of those 256 bytes within the file).
I suppose in your case the file ends by closing the JSON array, but on each subsequent "append" the array itself is extended: its closing bracket is removed, another JSON structure is added, and the array is closed again in a new place. That invalidates the stored position/checksum, so Splunk re-reads the file.
Unfortunately, you can't do much about it. As I said before, you'd be best off scripting some external solution to read that array and dump its contents in a sane manner to another file for reading.
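As one hedged example of that external step (assuming jq is available, the file really is a single top-level JSON array, and the paths here are made up), you could flatten the array into a line-delimited file and point the monitor input at that instead:

# emit one JSON object per line; monitor the .ndjson file, not the array file
# note: you would still need logic to write only elements you haven't emitted before,
# otherwise each run re-appends the whole array
jq -c '.[]' /home/ws/logs/yourfile.json >> /home/ws/logs/yourfile.ndjson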
Hi All,
I am looking for help onboarding Citrix VDI logs & Citrix WAF logs into Splunk. A Splunk add-on is not available; we also got this confirmed with Splunk Support. Can anyone help & guide on the best practice to onboard Citrix VDI & WAF logs? Much appreciated if you have solutions.
@Mridu27
You can either remove all roles associated with the user or simply delete the user altogether. There is no way to disable accounts, unfortunately. Some suggestions:
- Take away all their roles, including user.
- Change the passwords on the accounts (you will need to give them new passwords when you are done).
- You could edit web.conf and use the acceptFrom parameter to limit logins only to specific IPs or a subnet.

acceptFrom = <network_acl> ...
* Lists a set of networks or addresses from which to accept connections.
* Separate multiple rules with commas or spaces.
* Each rule can be in one of the following formats:
    1. A single IPv4 or IPv6 address (examples: "10.1.2.3", "fe80::4a3")
    2. A Classless Inter-Domain Routing (CIDR) block of addresses (examples: "10/8", "192.168.1/24", "fe80:1234/32")
    3. A DNS name, possibly with a "*" used as a wildcard (examples: "myhost.example.com", "*.splunk.com")
    4. "*", which matches anything
* You can also prefix an entry with '!' to cause the rule to reject the connection. The input applies rules in order, and uses the first one that matches. For example, "!10.1/16, *" allows connections from everywhere except the 10.1.*.* network.
* Default: "*" (accept from anywhere)
@livehybrid
The screenshot is working fine, but if I implement the same for multiple fields I am not getting results. Am I missing anything in the below?

[| makeresults
 | eval text_search="*$text_search$*"
 | eval escaped=replace(text_search, "\\", "\\\\")
 | eval FileSource=escaped, RemoteHost=escaped, LocalPath=escaped, RemotePath=escaped
 | return FileSource RemoteHost LocalPath RemotePath ]
Assuming you have admin access, you can find the source types under the Settings menu. From there you can see what extractions are configured; I suspect these aren't handling your custom fields as you expect. You could also try using the extract command:

((host="*.prod.domain.com" "Carrier updates summary;") OR (index=prod_index_eks kub.pod_name="domain-*" log="*Carrier updates summary;*"))
| extract
| eval message=coalesce(message, log)
| table message
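If the automatic extractions still don't give you the fields, a hedged alternative sketch is to pull them out explicitly with rex (the key names come from your sample event; the capture-group names are my own):

((host="*.prod.domain.com" "Carrier updates summary;") OR (index=prod_index_eks kub.pod_name="domain-*" log="*Carrier updates summary;*"))
| eval message=coalesce(message, log)
| rex field=message "user=\"(?<user>[^\"]+)\""
| rex field=message "carrier_slug=\"(?<carrier_slug>[^\"]+)\""
| table message user carrier_slug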
The logs are coming from a Django application, and the sourcetype is set to the name of the application (as shown by the | metasearch sourcetype=* command). This is how we are sending logs from the application:

logger.info('Carrier updates summary; message="The following updates message", user="john_doe", carrier_slug="example_carrier"')

We are using the below query for extraction:

((host="*.prod.domain.com" "Carrier updates summary;") OR (index=prod_index_eks kub.pod_name="domain-*" log="*Carrier updates summary;*"))
| eval message=coalesce(message, log)
| table message

I hope this provides some context about our logs. Apologies if it doesn't; I'm still very new to Splunk. I really appreciate your help!