All Posts

That is what the stats command does. Use the by keyword to group results based on the values of certain fields.

| stats sum(Success) as Success, sum(Failed) as Failed by Application
| eval Total=Success + Failed
| eval percentage=round(Failed*100/Total, 3)
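If you want to see the calculation in isolation, here is a minimal run-anywhere sketch of the same pipeline; the Application names and counts are made up purely for illustration:

| makeresults count=2
| streamstats count as n
| eval Application=if(n=1,"AppA","AppB"), Success=if(n=1,90,45), Failed=if(n=1,10,15)
| stats sum(Success) as Success, sum(Failed) as Failed by Application
| eval Total=Success + Failed
| eval percentage=round(Failed*100/Total, 3)

With these made-up numbers, AppA comes out at a percentage of 10.000 and AppB at 25.000.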
Hi Ryan, we have not found a workaround.   We are in the process of contacting AppDynamics.  
Thank you for your response @PickleRick. I tried running curl in verbose mode. After a successful connection to the proxy, I get the error below, but I am unable to locate the squid.conf file.

X-Squid-Error: ERR_ACCESS_DENIED 0
@gcusello We had two requirements for the same sourcetype. One involved line breaking, and the other required password masking during ingestion. As our search heads are managed by Splunk Support and hosted in the Cloud, we created a custom app and deployed the props.conf in the default directory. After uploading the app for the cloud vetting process, it was successfully installed. However, I've noticed that the logs are now being separated into individual events, which is acceptable, but the passwords are still visible and haven't been masked according to our requirement. I'm unsure where exactly I may have gone wrong.

This is the props.conf for reference:

[sourcetype]
SHOULD_LINEMERGE = false
SEDCMD = s/password: ([^;]+);cpassword: ([^;]+);/password: (####);cpassword: (####);/gm

Sample log for reference:

[2024-03-01_06:32:08] INFO : REQUEST: User:abc CreateUser POST: name: xyz;email: abc@gmail.com;password: xyz@123;cpassword: xyz@123;role: Administrator;

Kindly help with this requirement.
It may depend on the OS version. On mine, when I ran dpkg -l | grep xz, that's the only package I see. I thought about xz*; that might be a better play here. Is lib different from util, or are they just different names per OS? Thanks for the feedback!
Hi, I'm looking for a way to connect Splunk to an ODBC database, so that Splunk will be able to pull any data it needs from that database. So far, I have been told that Splunk works with JDBC and the other product works with ODBC, so there is no way to make that connection. Can someone tell me otherwise?
Good day all,

We have enabled our searches as durable searches. In our environment, due to one activity or another, a scheduled search may skip, go into delegated_remote_error, or go into delegated_remote_completion with success=0. In those cases I want to take the last status of that scheduled search plus its scheduled time and check whether that time period is accommodated by an upcoming durable_cursor. How can I achieve this?

I tried the search below, but it only covers the successful runs. How do I capture the failed ones too? The subsearch picks the saved searches whose scheduled time did not succeed in the last 7 hours, and the outer search then checks whether that durable_cursor was picked up for the next run and whether it succeeded. Is this the right approach, or is there a better alternative?

index=_internal sourcetype=scheduler
    [search index=_internal sourcetype=scheduler earliest=-7h@h latest=now
    | stats latest(status) as FirstStatus by scheduled_time savedsearch_name
    | search NOT FirstStatus IN ("success","delegated_remote")
    | eval Flag=if(FirstStatus="delegated_remote_completion" OR FirstStatus="delegated_remote_error",scheduled_time,"NO VALUE")
    | fields Flag savedsearch_name
    | rename Flag as durable_cursor ]
| stats values(status) as FinalStatus values(durable_cursor) as durable_cursor by savedsearch_name scheduled_time
Ok Thanks @ITWhisperer 
I have a dashboard named dashboard1 created using Splunk Dashboard Studio. It has a dropdown, and based on the selection there it displays a line graph. Example: when AWSAccount is selected, the line graph contains different account ids (xyseries, $Account_Selection$, costing).

There is another dashboard (dashboard2) which has details of the AWS accounts. It has two dropdowns: awsaccountid and accountname.

My requirement, by example: when I click on any line in dashboard1 (where each line is a different AWSAccount), it should direct to dashboard2 with that AWSAccount selected in the dropdown of dashboard2. Similarly, if AWSAccountName is selected in the dropdown and any account is clicked in the panel, it should direct to dashboard2 with the same AWSAccountName in the dashboard2 dropdown. If AWSAccount is selected and clicked, accountname in dashboard2 should have the default value "All", i.e. *, and vice versa if AWSAccountName is selected.

What have I done? Under Interactions, I selected "Link to dashboard" with dashboard2 and gave Token name = account_id (this is the token set in dashboard2), Value = Name.

"eventHandlers": [
    {
        "type": "drilldown.linkToDashboard",
        "options": {
            "app": "Search",
            "dashboard": "dashboard2",
            "newTab": true,
            "tokens": [
                {
                    "token": "awsaccountid",
                    "value": "name"
                }
            ]
        }
    }
]

"eventHandlers": [
    {
        "type": "drilldown.linkToDashboard",
        "options": {
            "app": "search",
            "dashboard": "dashboard2",
            "tokens": [
                {
                    "token": "awsaccountid",
                    "value": "row.AWSAccount.value"
                }
            ],
            "newTab": true
        }
    }
]

This is working fine. But if I add AWSAccountName under tokens and pass a value to it in the same way, the AWSAccount value ends up displayed in both "awsaccountid" and "accountname" of dashboard2.

Please can anyone help me implement this?

Regards,
PNV
@ITWhisperer I think this is the most suitable answer to my question, as you posted earlier: "it looks like the reports that were run on Feb 29th were done manually / ad hoc to back-fill the summary index for the earlier weeks before the schedule was set up and running correctly."
1. Run curl with -v to see its operation verbosely. Most probably you're trying to read cryptographic material from a directory you don't have access to.

2. To use client certificates from Python's requests library, you can do it like this: https://requests.readthedocs.io/en/latest/user/advanced/#client-side-certificates
Thanks, that made me dig in the right place, leading to https://splunk.my.site.com/customer/s/article/User-is-getting-an-error-message-when

Essentially, it was found that all the lookups present in the app "Splunk_Security_Essentials" are added to the denylist by default. The resolution to the error is to add local=true at the end of the SPL command, as below:

... | lookup isWindowsSystemFile_lookup filename local=true

The indexers need a read-only copy of the knowledge bundle in order to run searches. Splunk Security Essentials brings a significant amount of data that does not need to be copied to the search peers. Adding "local=true" forces the lookup to run on the search head and not on any remote peer. That's OK for my purposes, I think.
Are you sure you wanted the old value of get as old_put? Also, you can just express your condition as a | where command to find only the matching results. Then you'd trigger the alert only if there are any results at all.
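As a rough, run-anywhere sketch of that pattern (the host names and the old_get/new_get values are made up here, since I don't know what your actual search produces):

| makeresults count=2
| streamstats count as n
| eval host=if(n=1,"hostA","hostB"), old_get=if(n=1,100,50), new_get=if(n=1,80,120)
| where new_get < old_get

Only the rows satisfying the condition survive the where command, so when nothing matches the search returns zero results, and an alert set to trigger on "number of results greater than 0" stays quiet.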
These are consistent with the info_search_time graphic you shared earlier - is that what you are asking?
I am trying to call a 3rd party API which supports certificate and key based authentication. I have an on-prem instance of Splunk (version 9.0.2) running on a VM. I have verified the API response on the VM via curl (command used: curl --cert <"path to .crt file"> --key <"path to .key file"> --header "Authorization: <token>" --request GET <"url">), which returns a response for a normal user. However, when running the same curl command using shell in Splunk Add-on Builder's Modular Data Inputs, the command only works with "sudo"; otherwise it gives Error 403. When checked with "whoami", it returns the user as root.

Question 1: Why is the curl command not working without sudo even when the user is root? Is there any configuration I need to modify to make it work without sudo?

Question 2: How do I make the same API call using Python code in the Modular Data Inputs of Splunk Add-on Builder?
Let me know if anything else is needed.
Hi guys, I don't know if you have already done this, but could you please help? I'm trying to create a new, simple datepicker where you just choose a date, then click a "Submit" button, and it shows the results. I already created the datepicker, but it doesn't do anything. I tried to follow a similar example here, but it isn't the same.
@ITWhisperer Manual runs of the search that collect into the summary index create those stash files; that is unrelated to the occurrence of duplicate events. The allocation across all sources is equal (25%), as you can see below. Is that correct?
The stash files are usually created by the collect command. Depending on your retention settings, you may be able to find out who ran the report from your _audit index.
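For example, something along these lines may surface who issued the collecting searches; this is only a sketch, assuming search auditing is within retention, and you would want to narrow the time range and match on your own summary index name or report name:

index=_audit action=search info=completed search="*collect*"
| table _time user savedsearch_name search
| sort - _time

Ad hoc runs typically show an empty savedsearch_name, which itself is a hint that the report was run manually.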
Finally, the key piece of information! You are expecting this to be an Excel date value.

| makeresults
| eval date=45123
| eval _time=(date-25567-2)*24*60*60

Excel uses dates based on the start of the 20th century, 1900-01-01, counting in days, whereas Splunk uses unix-style times based on seconds since 1970-01-01. So you need to subtract the number of days between these two baseline points and multiply by the number of seconds in a day. Note that Excel may not be calculating the date correctly, since it indexes the first day as 1 (instead of 0) and incorrectly assumes that 1900 was a leap year (which it wasn't), hence the extra -2 days in the calculation. Having said that, you will have to decide whether the _time value returned is correct based on the source of your data, i.e. it could be a couple of days out.
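To sanity-check the conversion, you can format the result back into a readable date. As an illustrative value, Excel serial 45352 should correspond to 2024-03-01 00:00:00 UTC under this formula; strftime renders the time in your own timezone, so the displayed date can shift by a day if you are behind UTC:

| makeresults
| eval date=45352
| eval _time=(date-25567-2)*24*60*60
| eval check=strftime(_time, "%Y-%m-%d %H:%M:%S %Z")

If the dates you see are consistently a day or two off, adjust the -2 offset to match how your source system generated the serial numbers, as noted above.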