All Posts

Hello, I tried to change a custom app's name (e.g. BRB_App to CAA_App) on the deployer through the CLI, but I realized that the change only affects the folder name, not the name of the app shown in the UI when I checked. Is there a way to make the change apply to the display name of the custom app and not just the folder name?
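For context, the name shown in Splunk Web normally comes from the app's app.conf rather than from the folder name. A minimal sketch of the relevant stanza, assuming an app.conf inside the renamed app's default (or local) directory — the label value here is illustrative:

# CAA_App/default/app.conf
[ui]
is_visible = true
# label controls the app name displayed in Splunk Web
label = CAA App

[launcher]
description = Custom app (renamed from BRB_App)

Since this is on a deployer, the updated app would still need to be pushed to the search head cluster afterwards (splunk apply shcluster-bundle).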
When copy-pasting from ChatGPT, you forgot to include the rest of the "answer". [...] Here are some additional tips:
- Check printer documentation: Start by checking the documentation for your printers to see if they support forwarding logs, and if so, how to configure it.
- Test in a lab environment: Before implementing this in a production environment, it's a good idea to test the setup in a lab environment to ensure everything works as expected.
- Security considerations: Make sure to consider security implications, especially when configuring devices to forward logs to other systems. Ensure that communication between the printers, print server, and Splunk instance is secure.
- Consult Splunk documentation: Splunk documentation is comprehensive and can provide detailed guidance on setting up forwarders and configuring inputs.
By following these steps and considering the tips provided, you should be able to set up a system where printer logs are forwarded to Splunk via an intermediate print server. If you encounter any specific issues or have further questions, feel free to ask! [...]

Come on, people. What are you trying to achieve by posting such generic ChatGPT-generated responses? This doesn't solve anything; it only dilutes the quality of responses on Answers.
Squid is not part of a Splunk Enterprise installation. So if you're hitting Squid, it means either it is working as a reverse proxy for your target service, or you are connecting to it in order to perform the outbound connection. Also, if your proxy is doing TLS inspection, cert-based mutual authentication won't work unless you create an exception in your inspection policy.
Thank you for the inputs. I checked, and this is not the root cause. I need to identify the root cause to prevent such cases from happening in the future.
That is what the stats command does. Use the by keyword to group results based on the values of certain fields.

| stats sum(Success) as Success, sum(Failed) as Failed by Application
| eval Total=Success + Failed
| eval percentage=round(Failed*100/Total, 3)
Hi Ryan, we have not found a workaround. We are in the process of contacting AppDynamics.
Thank you for your response @PickleRick. I tried running curl in verbose mode. After a successful connection to the proxy, I am getting the error below, but I am unable to locate the squid.conf file.

X-Squid-Error: ERR_ACCESS_DENIED 0
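For what it's worth, ERR_ACCESS_DENIED is Squid reporting that an http_access rule rejected the request, and squid.conf lives on the proxy host itself (commonly /etc/squid/squid.conf), not on the Splunk server — which would explain why you can't find it. A sketch of the kind of rule the proxy admin would need to add, with hypothetical addresses and hostnames:

# /etc/squid/squid.conf (on the proxy host; names and addresses are illustrative)
acl splunk_host src 10.0.0.5/32          # the Splunk VM making the outbound call
acl target_api dstdomain .example.com    # the third-party API being called
http_access allow splunk_host target_api
http_access deny all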
@gcusello We had two requirements for the same sourcetype. One involved line breaks, and the other required password masking during ingestion. As our search heads are managed by Splunk Support and hosted in the cloud, we created a custom app and deployed the props.conf in the default directory. After uploading the app for the cloud vetting process, it was successfully installed. However, I've noticed that the logs are now being separated into individual events, which is acceptable, but the passwords are still visible and haven't been masked according to our requirement. I'm unsure where exactly I may have missed it.

This is the props.conf file for reference:

[sourcetype]
SHOULD_LINEMERGE = false
SEDCMD = s/password: ([^;]+);cpassword: ([^;]+);/password: (####);cpassword: (####);/gm

Sample log for reference:

[2024-03-01_06:32:08] INFO : REQUEST: User:abc CreateUser POST: name: xyz;email: abc@gmail.com;password: xyz@123;cpassword: xyz@123;role: Administrator;

So kindly help with this requirement.
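One thing worth checking, as a hedged suggestion rather than a confirmed diagnosis: in props.conf the documented setting is SEDCMD-<class> — it needs a class name after the dash, and to my knowledge a bare SEDCMD is not recognized, which would match the symptom of linebreaking working while masking does not. Index-time masking also has to land on the tier that first parses the data (indexers or a heavy forwarder), not the search heads. A sketch under those assumptions (the class name mask_passwords is made up; the m flag is dropped since only g / Ng are documented for Splunk's sed scripts):

[sourcetype]
SHOULD_LINEMERGE = false
# SEDCMD requires a named class, e.g. SEDCMD-mask_passwords
SEDCMD-mask_passwords = s/password: ([^;]+);cpassword: ([^;]+);/password: ####;cpassword: ####;/g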
It may depend on the OS version. In mine, when I did dpkg -l | grep xz, that's the only one I see. I thought about xz*; that might be a better play here. Is lib different from util, or are they just different names per OS? Thanks for the feedback!
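For reference, on Debian/Ubuntu the command-line tools and the library ship as separate packages (xz-utils and liblzma5, respectively), so grepping for just "xz" can miss the library package. A quick sketch that should catch both, assuming a dpkg-based system:

# list both the xz tools and the lzma library packages
dpkg -l | grep -Ei 'xz|lzma'

# or query the package name patterns directly
dpkg -l 'xz*' 'liblzma*'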
Hi, I'm looking for a way to connect Splunk to an ODBC database, so that Splunk will be able to pull any data needed from that database. So far, I have been told that Splunk works with JDBC and the other product works with ODBC, so there is no way to make that connection. Can someone tell me otherwise?
Good day all,

We have enabled the searches as durable searches. In our environment, due to one activity or another, a scheduled search may skip, go into delegated_remote_error, or go into delegated_remote_completion with success=0. In those cases I wanted to take the last status of that scheduled search plus its scheduled time, and check whether that time period is accommodated in an upcoming durable_cursor. How can I achieve this?

I tried the search below, but it only fits the successful ones. How do I get the failed ones too? I am running the subsearch to collect the saved searches whose scheduled time was not a success in the last 7 hours, and feeding those into the outer search to check whether that durable_cursor was taken up for the next run and whether it succeeded. Is this the right approach, or is there a better alternative?

index=_internal sourcetype=scheduler
    [ search index=_internal sourcetype=scheduler earliest=-7h@h latest=now
      | stats latest(status) as FirstStatus by scheduled_time savedsearch_name
      | search NOT FirstStatus IN ("success","delegated_remote")
      | eval Flag=if(FirstStatus="delegated_remote_completion" OR FirstStatus="delegated_remote_error", scheduled_time, "NO VALUE")
      | fields Flag savedsearch_name
      | rename Flag as durable_cursor ]
| stats values(status) as FinalStatus values(durable_cursor) as durable_cursor by savedsearch_name scheduled_time
OK, thanks @ITWhisperer
I have a dashboard named dashboard1 created using Splunk Dashboard Studio. It has a dropdown. Based on the selection there, it displays a line graph. Example: when AWSAccount is selected, the line graph contains different account ids (xyseries, $Account_Selection$, costing).

There is another dashboard (dashboard2) which has details of the AWS accounts. It again has two dropdowns: awsaccountid and accountname.

My requirement, by example: when I click on any line in dashboard1 (where each line is a different AWSAccount), it should direct to dashboard2 with that AWSAccount selected in dashboard2's dropdown. Similarly, if AWSAccountName is selected in the dropdown and any account is clicked in the panel, it should direct to dashboard2 with the same AWSAccountName in dashboard2's dropdown. If AWSAccount is selected and clicked, accountname in dashboard2 should get the default value "All", i.e. *, and vice versa if AWSAccountName is selected.

What have I done? Under interactions, I selected "link to dashboard" with dashboard2 and gave token name = account_id (this is the token set in dashboard2), value = Name.

"eventHandlers": [
  {
    "type": "drilldown.linkToDashboard",
    "options": {
      "app": "Search",
      "dashboard": "dashboard2",
      "newTab": true,
      "tokens": [
        {
          "token": "awsaccountid",
          "value": "name"
        }
      ]
    }
  }
]

"eventHandlers": [
  {
    "type": "drilldown.linkToDashboard",
    "options": {
      "app": "search",
      "dashboard": "dashboard2",
      "tokens": [
        {
          "token": "awsaccountid",
          "value": "row.AWSAccount.value"
        }
      ],
      "newTab": true
    }
  }
]

This is working fine. But if I add AWSAccountName under tokens and pass a value to it as above, it causes the AWSAccount value to be displayed in both "awsaccountid" and "accountname" of dashboard2. Please can anyone help me with how to implement this?

Regards,
PNV
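A hedged sketch of one direction that might help, assuming the panel's underlying search exposes both an AWSAccount and an AWSAccountName column (those field names are assumptions, not taken from the original search): each entry in the tokens array can point at a different column of the clicked row, so the two dashboard2 tokens do not have to receive the same value.

"eventHandlers": [
  {
    "type": "drilldown.linkToDashboard",
    "options": {
      "app": "search",
      "dashboard": "dashboard2",
      "newTab": true,
      "tokens": [
        {
          "token": "awsaccountid",
          "value": "row.AWSAccount.value"
        },
        {
          "token": "accountname",
          "value": "row.AWSAccountName.value"
        }
      ]
    }
  }
]

If the chart only carries one of the two fields (because xyseries pivots on $Account_Selection$), the other column won't exist in the clicked row, which could explain both tokens ending up with the same value; adding the second field to the search results would be the part to verify.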
@ITWhisperer I think this is the most suitable answer to my question, as you posted earlier: "it looks like the reports that were run on Feb 29th were done manually / ad hoc to back-fill the summary index for the earlier weeks before the schedule was set up and running correctly."
1. Run curl with -v to see its operation verbosely. Most probably you're trying to read cryptographic material from a directory you don't have access to.
2. In order to use client certificates, you can do it like this: https://requests.readthedocs.io/en/latest/user/advanced/#client-side-certificates
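Following that link, a minimal sketch of a client-certificate call with requests — the paths, URL, and token below are placeholders, not real values:

import requests

# hypothetical locations; point these at your actual cert and key
CERT_PATH = "/opt/splunk/etc/apps/myapp/certs/client.crt"
KEY_PATH = "/opt/splunk/etc/apps/myapp/certs/client.key"

response = requests.get(
    "https://api.example.com/endpoint",           # placeholder URL
    cert=(CERT_PATH, KEY_PATH),                   # client-side certificate + private key
    headers={"Authorization": "Bearer <token>"},  # same header as the curl call
    timeout=30,
)
response.raise_for_status()
print(response.json())

Note that, per point 1, the process running this script needs read access to both files.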
Thanks, that made me dig in the right place, leading to https://splunk.my.site.com/customer/s/article/User-is-getting-an-error-message-when

Essentially, ... it was found that all the lookups present in the app "Splunk_Security_Essentials" are added to the denylist by default. The resolution to the error is to add local=true at the end of the SPL command, as below:

... | lookup isWindowsSystemFile_lookup filename local=true

The indexers need a read-only copy of the knowledge bundle in order to run searches. Splunk Security Essentials brings a significant amount of data that does not need to be copied to the search peers. Adding "local=true" forces the lookup to run on the search head and not on any remote peer. That's OK for my purposes, I think.
Are you sure you wanted the old value of get as old_put? Also, you can just express your condition as a | where command to find only the matching results. Then you'd trigger the alert only if you had any results at all.
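For illustration only, with hypothetical field names (old_get and new_get are assumptions, not taken from the original search):

... | where old_get != new_get

The alert condition would then simply be "number of results is greater than 0".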
These are consistent with the info_search_time graphic you shared earlier - is that what you are asking?
I am trying to call a 3rd-party API which supports certificate- and key-based authentication. I have an on-prem instance of Splunk (version 9.0.2) running on a VM. I have verified the API response on the VM via the curl command (command used: curl --cert <"path to .crt file"> --key <"path to .key file"> --header "Authorization: <token>" --request GET <"url">), which gives a response for a normal user. However, when running the same curl command using shell in Splunk Add-on Builder's Modular Data Inputs, the command only works with "sudo"; otherwise it gives Error 403. When checked with "whoami", it returns the user as root.

Question 1: Why is the curl command not working without using sudo, even when the user is root? Is there any configuration that I need to modify to make it work without using sudo?

Question 2: How do I make the same API call using Python code in the Modular Data Inputs of Splunk Add-on Builder?
Let me know if anything else is needed.