All Posts

Hi @ws  Let us know how you get on with the Python script. In the meantime, the file you want to edit is $SPLUNK_HOME/etc/log.cfg (e.g. /opt/splunk/etc/log.cfg). Look for the category.<key> lines and change the default level (usually INFO) to DEBUG for those keys. You will need to restart Splunk. Then you should see further info in index=_internal component=<key>, which *might* help! This should be done on the forwarder picking up the logs. Don't forget to add karma/like to any posts which help.   Thanks Will
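For reference, a minimal sketch of what the edited lines in log.cfg might look like (the component names here are only examples; pick the keys relevant to your issue):

category.TailingProcessor=DEBUG
category.WatchedFile=DEBUG

After a restart, the extra detail appears in splunkd.log on that instance and therefore in index=_internal for those components.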
Hi @ganesanvc  If you do "| return FileSource RemoteHost LocalPath RemotePath" then it's going to AND these fields together in your main search - is this what you want? If you want an "OR" then I think you might want to do:

[| makeresults
| eval text_search="*$text_search$*"
| eval escaped=replace(text_search, "\\", "\\\\")
| eval FileSource=escaped, RemoteHost=escaped, LocalPath=escaped, RemotePath=escaped
| table FileSource RemoteHost LocalPath RemotePath
| format "(" "(" "OR" ")" "OR" ")" ]

This will create something like:

( ( FileSource="\\\\Test\\\\abc\\\\test\\\\abc\\\\xxx\\\\OUT\\\\" OR LocalPath="\\\\Test\\\\abc\\\\test\\\\abc\\\\xxx\\\\OUT\\\\" OR RemoteHost="\\\\Test\\\\abc\\\\test\\\\abc\\\\xxx\\\\OUT\\\\" OR RemotePath="\\\\Test\\\\abc\\\\test\\\\abc\\\\xxx\\\\OUT\\\\" ) )

Note - I am not 100% sure how many \\ you are expecting, but when I ran your makeresults search it failed and I had to escape the replace as: | eval escaped=replace(text_search, "\\\\", "\\\\\\\\") You can run the makeresults on its own and substitute your token to validate the output you get and ensure the search works correctly.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
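As a quick usage example, the subsearch body above can be run on its own with a literal value substituted for the token to check the generated filter (the sample path below is only an illustration):

| makeresults
| eval text_search="*\\Test\abc\test\abc\xxx\OUT\*"
| eval escaped=replace(text_search, "\\\\", "\\\\\\\\")
| eval FileSource=escaped, RemoteHost=escaped, LocalPath=escaped, RemotePath=escaped
| table FileSource RemoteHost LocalPath RemotePath
| format "(" "(" "OR" ")" "OR" ")"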
I have instrumented a Kubernetes cluster in a test environment. I have also instrumented a Java application within that cluster. Metrics are reporting for both. However, within APM, when clicking Infrastructure at the bottom of the screen, I get no data, as if I have no infrastructure configured. What configuration am I missing to correlate the data between the two?   Clicking Infrastructure Under an Instrumented APM Service
My search query:

Index=xxx <xxxxxxx>
| eval Date=strftime(_time,"%Y-%m-%d")
| lookup holidays.csv HolidayDate as Date output HolidayDate
| eval should_alert=if((isnull(HolidayDate)), "Yes", "No")
| table Date should_alert
| where should_alert="Yes"

I've been trying to create a complicated alert; unfortunately it failed, and I'm looking for guidance. The alert is supposed to fire if there are no results OR more than 1, unless it's the day after a weekend or holiday - in other words, it should look for 0 results OR anything other than 1. I set the trigger condition to: Number of results is not equal to 1. When a date appears in the muted dates (holidays.csv) I want the alert suppressed, but it turned out there were 0 events that day, and the 0 events/results triggered the alert, so it fired on the Easter date. Also, when we mute a date, does that make it return 0 events? Technically it will still fire on those dates due to my trigger condition. How can we make sure it is muted by the holidays.csv lookup file, and yet still alert on 0 events on dates that are not in holidays.csv?
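One way to express that logic in a single search (a sketch, assuming the alert runs once per day, holidays.csv has a HolidayDate column in %Y-%m-%d format, the date to check is the day the alert runs, and the trigger condition is changed to "number of results is greater than 0"):

index=xxx <xxxxxxx>
| stats count
| eval Date=strftime(now(), "%Y-%m-%d")
| lookup holidays.csv HolidayDate AS Date OUTPUT HolidayDate
| where isnull(HolidayDate) AND count!=1

Because stats count with no BY clause always returns a row (count=0 when nothing matched), a zero-event day still produces a result, and the lookup plus where clause suppresses days listed in holidays.csv. Adjust the strftime/now() if you need the previous day rather than the current one.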
@hrawat wrote: Note: As a side effect of this issue, maxKbps (limits.conf) will also be impacted as it requires thruput metrics to function. Can you elaborate on how maxKbps is impacted?
Does the script send to HEC or write to a file? If HEC - which endpoint?
@PickleRick The Python script is designed to connect to the Oracle database, extract data from designated tables, and forward the retrieved data into Splunk for ingestion.
Wait a second. File or table? What kind of source does this data come from? Monitor input? DB Connect? Have you checked the actual data with someone responsible for the source? I mean, does the ID or whatever it is in your data correspond to the right timestamp?
@livehybrid Both servers are in the same timezone - I have already compared the timezone settings on Project A and Project B.
Hi, I need to move all my knowledge objects, including dashboards, alerts, saved searches, lookups etc., from my on-prem SH to the cloud SH. Please help me out on this. Can I move each app's configs to the cloud SH so that I can replicate the same knowledge objects? We are done with the data migration; now we need to move the apps and knowledge objects.
@livehybrid, ok, let me test out the method you mentioned - download the file as a temp file, then write its contents to the existing file. I believe this can be handled within the same Python script, which connects to the FTP server and downloads the file to my local Splunk server.   Thanks for sharing the additional information. Since I'm still learning, could you advise which log file I should be checking after enabling DEBUG for the following components: TailingProcessor, BatchReader, WatchedFile, FileTracker?
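For reference, those components all log to splunkd.log on the forwarder ($SPLUNK_HOME/var/log/splunk/splunkd.log), and the same events are indexed into _internal, so a sketch of a search to review the DEBUG output might be:

index=_internal sourcetype=splunkd log_level=DEBUG component IN (TailingProcessor, BatchReader, WatchedFile, FileTracker)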
@isoutamo  I'm running the following - index=<my_index> linecount=2 | table _raw - and everything shows up as one line; I don't see any sign of \n. What am I missing? I also checked with an encoding tool and it doesn't show either ASCII code 13 (CR) or 10 (LF) within these lines. My biggest confusion is the fact that for this sourcetype I have SHOULD_LINEMERGE=FALSE, so how come the events sometimes have multiple lines?
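One way to check for embedded line breaks directly in SPL (a sketch; urldecode is used only to produce literal LF/CR characters to split on, and the count is 0 when none are present):

index=<my_index> linecount=2
| eval lf_count=mvcount(split(_raw, urldecode("%0A"))) - 1
| eval cr_count=mvcount(split(_raw, urldecode("%0D"))) - 1
| table linecount lf_count cr_count _raw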
Hi @tech_g706  The official docs are at https://docs.typo3.org/m/typo3/reference-coreapi/main/en-us/ApiOverview/Logging/Index.html if they're any use to you. Let me know if there's anything else I can help with.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @capjacksparo  Please can you confirm where the splunk_metadata.csv that you updated is located? I'm not sure it's possible to override the defaults other than by using the splunk_metadata.csv file.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
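For reference, in a standard SC4S container deployment the local override file usually lives at /opt/sc4s/local/context/splunk_metadata.csv on the host, and entries take the form key,metadata,value - for example (the vendor_product key and index name below are only illustrations):

cisco_asa,index,netfw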
Hi @ganesanvc  Were you able to try the below? @livehybrid wrote: Hi @ganesanvc  Does "text_search" come from a search result - or is it something like a token you are passing in? I couldn't tell from the request, but if it's coming from a token and you want to apply the additional escaping then you can do this:

index=main source="answersDemo" [| makeresults
| eval text_search="*\\Test\abc\test\abc\xxx\OUT\*"
| eval FileSource=replace(text_search, "\\\\", "\\\\\\\\")
| return FileSource ]

Note: I used a sample event in index=main, as you can see in the results above, created using:

| windbag | head 1
| eval _raw="Test Event for SplunkAnswers user=Demo FileSource=\"MyFileSystem\\Test\\abc\\test\\abc\\xxx\\OUT\\test.exe\" fileType=exe"
| eval source="answersDemo"
| collect index=main output_format=hec

I may have got the wrong end of the stick with what you're looking for here, but let me know!  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @uagraw01  It's very suspicious that the offset looks to be ~ -36000 seconds - pretty much *exactly* 10 hours. Could there be an issue with timezones here? It doesn't sound like the data is blocked for exactly 10 hours in the data ingestion pipeline; it feels more likely that a previous server in the ingestion journey has an incorrect timezone. The timezone set in props.conf for the given sourcetype (TZ=) will be used based on "the timezone provided by the forwarder" - it's worth checking the forwarder in Project B, assuming it is different from the one in Project A.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
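If it does turn out to be a timezone problem, a sketch of the kind of props.conf override that can compensate (the sourcetype name and timezone are assumptions; it needs to be on the first full Splunk instance that parses the data, e.g. a heavy forwarder or the indexers):

[your:sourcetype]
TZ = Australia/Brisbane

Australia/Brisbane is UTC+10 with no DST, which would match an exact 10-hour offset.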
Hi @ws  If you are using a script to do this, it might be worth changing the process a little bit - instead of downloading the file and overwriting the existing file, try downloading it as a temp file, then writing the contents to the existing file. This will prevent Splunk from thinking it is a new file. There's an interesting thread here https://community.splunk.com/t5/Getting-Data-In/Duplicate-indexing-of-data/m-p/376619 which might help you. Another thing you could do is change the logging to DEBUG for the following components: TailingProcessor, BatchReader, WatchedFile, FileTracker. Then see what Splunk logs the next time you update the file.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @Mridu27  Unfortunately there isn't a capability to disable a user in Splunk; there is an Idea raised for this which you might like to upvote though - https://ideas.splunk.com/ideas/PLECID-I-682 There are a few options to prevent users accessing Splunk, some mentioned in other answers such as the one @kiran_panchavat suggested (https://community.splunk.com/t5/Security/Disable-user-account-temporary/td-p/396592); however, in the currently supported versions it isn't possible to remove all roles from a user, and I wouldn't recommend editing web.conf to limit by IP - if you are disabling a user for security concerns then they may still be able to access via other IPs, and you also risk blocking out valid users. Ultimately the best solution may boil down to your specific environment, e.g. on-prem/Splunk Cloud, local users, LDAP or SSO/SAML. What are you using for authentication? If you are using local Splunk accounts then I would recommend creating a blank role with no capabilities and no roles inherited - this means that if they attempted to log in they could not interact with Splunk; they couldn't run a search, for example. Then assign only that role to the user. However - if you are using SAML/SSO then it's the SAML provider that sends the groups that the user belongs to; in this scenario you should disable the user or remove the groups from the Identity Provider, as changing these in Splunk will mean they get overridden if the user logs in! Quick side note - you may see an "Active" status next to users in the Splunk user list; whilst there isn't a capability to disable users, a user can be in a "locked out" state if they fail to log in too many times.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
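A sketch of what such a minimal "no access" role could look like in authorize.conf (the role name is an assumption; with no imported roles and no capability lines, a user holding only this role can authenticate but cannot search or see any indexes):

[role_no_access]
# no importRoles and no capability settings, so nothing is inherited or granted
srchIndexesAllowed =
srchIndexesDefault =
srchJobsQuota = 0
rtSrchJobsQuota = 0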
Hi @Sankar  Please can you check whether https://splunkbase.splunk.com/app/6280 (Citrix Analytics Add-on for Splunk) solves any of your requirements? I am not too familiar with Citrix but it does specifically mention VDI. Aside from that, you would need to check the output capabilities of the Citrix tooling; I suspect that it will be syslog unless it has a dedicated Splunk output. If you end up going down the syslog route then you should look into using Splunk Connect for Syslog (SC4S), or using something like rsyslog to capture the syslog feed onto the filesystem and then forward it in to Splunk with a UF. I would recommend sending the data to a development environment first, ensuring that you configure the relevant props/transforms to meet your requirements before sending to your production environment.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
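If you go the rsyslog-plus-UF route, a sketch of the monitor stanza in inputs.conf on the UF (the path, index and sourcetype are placeholders for whatever you set up in rsyslog and your dev environment):

[monitor:///var/log/citrix/*.log]
index = citrix
sourcetype = citrix:syslog
disabled = false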
My original Python script accessed the FTP server directly and used the mget command to retrieve files from the monitored folder. But, as you suggested, I am now pulling the files from the FTP server into a different directory on my local Splunk server first, before copying them to the monitored directory on the Splunk server. I made a slight change to the script so that it only runs cp after it exits from the FTP session.

ftp -inv "$HOST" <<EOF >> /home/ws/fetch_debug.log 2>&1
user $USER $PASS
cd $REMOTE_DIR
lcd /home/ws/pull
mget *
bye
EOF
cp -v /home/ws/pull/*.json /home/ws/logs >> /home/ws/fetch_debug.log 2>&1