All Posts


Thank you. I can confirm that an uninstallation of Universal Forwarder 9.1.2 followed by an installation of Universal Forwarder 9.1.3 works without issues.
If the lookup is explicitly generated with the outputlookup command, you can try something like

| rest /servicesNS/-/-/saved/searches
| regex search="outputlookup[^|]+<your_lookup_name>"
| table title eai:acl.app search
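If the lookup is written by a scheduled search, a variant of the same idea can narrow the list and show when each candidate runs (a sketch; is_scheduled and cron_schedule are standard fields returned by this endpoint, and splunk_server=local restricts the call to the local search head):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search is_scheduled=1
| regex search="outputlookup[^|]+<your_lookup_name>"
| table title eai:acl.app cron_schedule search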
I want to know which saved search is generating a particular lookup , How do I do that?
Drop spath. Splunk is already giving you field values. Adding spath as illustrated in your example will only give each field a duplicate value. When your log source is JSON, spath can be used to extract from a specific field that embeds escaped JSON, or to extract the value of a specific path. | spath input=_raw does neither.
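For illustration (the field and path names here are made up), the case where spath does help is pulling one value out of a field that holds an escaped JSON string:

| spath input=payload path=user.name output=user_name

With JSON that Splunk has already extracted, that step is unnecessary and only duplicates values.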
Hi Splunkers, I have already configured the HF and installed the UF credentials, but I can't see the Palo Alto logs in Splunk Cloud.

On the HF, inputs.conf:

[udp://5000]
index = xxxxx_pan
disabled = false
sourcetype = pan_log

The HF and the Splunk Cloud instance are communicating. Please help me.
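A quick sanity check on the HF side might help narrow this down (a sketch; adjust names to your environment): confirm the input stanza is actually in effect, then look for indexing throughput for that sourcetype in the HF's own internal logs.

splunk btool inputs list udp --debug

index=_internal source=*metrics.log* group=per_sourcetype_thruput series=pan_log

It is also worth confirming that an index named xxxxx_pan actually exists in the Splunk Cloud environment; events sent to a nonexistent index are typically dropped.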
Hi All, I've been exploring various documentation and tutorials, but I'd love to hear from those who have hands-on experience. What are the best practices and recommended steps for configuring Kubernetes logs to seamlessly integrate with Splunk Enterprise? Are there any specific considerations or challenges I should be aware of during the setup process? Thanks in advance for sharing your expertise!
Hi Mentors, I have searched YouTube and other external sources to learn about use case creation. I can see that use cases can be created with Splunk Security Essentials, but I am new to Splunk and only know the basics, like fields and commands. I have asked around, and literally everyone treated me badly when I asked them to teach me how to create a use case; even my team leader is not willing to teach me, though he was trained at an institution five years back. I don't have any friends with Splunk knowledge. I beg you, please, anyone, teach me how to create a use case in Splunk and what the basics of creating one are. I need this. I am the first graduate in my family and I cannot afford the huge fees to learn Splunk. Any mentors, please help me; I want to learn Splunk and then teach it to my team members in a simple way they can understand. Please help me, Mentors.
I am also encountering an issue when upgrading from 9.1.2. Toward the end, the installation rolls back and fails. A fresh install works fine. Kindly advise.
We are looking into Splunk Cloud as a solution instead of our regular Splunk Enterprise (on-premises) setup. To test the feasibility of sending data from external sources (Jira Cloud, Redmine), I wanted to install our own custom app for testing. Unfortunately, there is currently no way for us to do that in the free trial I have. We simply want to test whether we can index data from Jira Cloud into Splunk Cloud without having to use a Heavy Forwarder or Universal Forwarder.

Ways to replicate: Splunk Cloud login > Manage Apps > no button for uploading a custom app / add-on.

Questions:
1. Is there a way to directly install custom apps / add-ons (originally built for Splunk Enterprise) in Splunk Cloud? We were thinking about compatibility issues, and whether the apps would work the same way.
2. Is there a way to gauge whether the quantity of data we want to send from external sources would require us to install a Heavy / Universal Forwarder? (We are trying to avoid additional costs by taking Splunk Cloud, so we were wondering if we could do without them.)
Hi all, I have read through the Splunk documentation for session timeouts here, but these seem to apply to Splunk overall: Configure user session timeouts - Splunk Documentation. However, I am not able to make the timeout user- or role-specific. Is there any solution for that? I read in a previous post that someone was able to make it role-specific, which would also work for me if it can be done: Session timeout for a Single username (not group) ... - Splunk Community. However, there wasn't much information on how he/she was able to do so. My main intent is to give admin users / normal users the usual timeout, e.g. 1 hour, while dashboard accounts are never logged out. Any advice would be appreciated, as it seems to be doable.
Use msiexec /i .... /l*vx logfile.txt and look for "value 3" in that file. The lines just above it will show more information about the installation failure.
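For example (the MSI filename here is hypothetical; substitute your actual installer):

msiexec /i splunkforwarder-9.1.3-x64-release.msi /l*vx logfile.txt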
Run both queries, then combine the results using stats. (Note the SQL string values need quotes, and the by field must match the Splunk field name exactly.)

| dbxquery query="select * from employee where companyID in ('A','B','C')"
| append [search index=company]
| stats values(*) as * by CompanyID
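One note on this approach: append followed by stats avoids the row limits that join can run into, but it only lines up if both result sets carry the company identifier under exactly the same field name. If dbxquery returns the column with different casing (column names follow the database), rename it first, e.g. | rename companyID as CompanyID (the exact column name is an assumption here).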
How do I correlate an index with dbxquery, with a condition or by iteration? See the sample below. Thank you for your help.

index=company

CompanyID  CompanyName  Revenue
A          CompanyA     3,000,000
B          CompanyB     2,000,000
C          CompanyC     1,000,000

| dbxquery query="select * from employee where companyID in ('A','B','C')"

OR iteration:

| dbxquery query="select * from employee where companyID = 'A'"
| dbxquery query="select * from employee where companyID = 'B'"
| dbxquery query="select * from employee where companyID = 'C'"

CompanyID  EmployeeName  EmployeeEmail
A          EmployeeA1    empA1@email.com
A          EmployeeA2    empA2@email.com
A          EmployeeA3    empA2@email.com
B          EmployeeB1    empB1@email.com
B          EmployeeB2    empB2@email.com
B          EmployeeB3    empB3@email.com
C          EmployeeC1    empC1@email.com
C          EmployeeC2    empC2@email.com
C          EmployeeC3    empC3@email.com

Expected result:

CompanyID  CompanyName  Revenue    EmployeeName  EmployeeEmail
A          CompanyA     3,000,000  EmployeeA1    empA1@email.com
A          CompanyA     3,000,000  EmployeeA2    empA2@email.com
A          CompanyA     3,000,000  EmployeeA3    empA2@email.com
B          CompanyB     2,000,000  EmployeeB1    empB1@email.com
B          CompanyB     2,000,000  EmployeeB2    empB2@email.com
B          CompanyB     2,000,000  EmployeeB3    empB3@email.com
C          CompanyC     1,000,000  EmployeeC1    empC1@email.com
C          CompanyC     1,000,000  EmployeeC2    empC2@email.com
C          CompanyC     1,000,000  EmployeeC3    empC3@email.com

OR:

CompanyID  CompanyName  Revenue    EmployeeName                        EmployeeEmail
A          CompanyA     3,000,000  EmployeeA1, EmployeeA2, EmployeeA3  empA1@email.com, empA2@email.com, empA2@email.com
B          CompanyB     2,000,000  EmployeeB1, EmployeeB2, EmployeeB3  empB1@email.com, empB2@email.com, empB3@email.com
C          CompanyC     1,000,000  EmployeeC1, EmployeeC2, EmployeeC3  empC1@email.com, empC2@email.com, empC3@email.com
Thanks very much for the solution, @yuanliu. Much appreciated!
Hi Woodcock, do you know if this is still the case nowadays (2024)? Thanks.
Hi. I think you may be hitting the dispatch.ttl setting https://community.splunk.com/t5/Splunk-Search/What-exactly-does-the-ttl-mechanism-do/td-p/446152 Use advanced edit on your search and see what yours is set to.
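For reference, a minimal sketch of raising it in savedsearches.conf (the stanza name must match your saved search's name; the value is in seconds, so 259200 = 72 hours):

[My Alert Name]
dispatch.ttl = 259200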
hi @inventsekar Thank you, you are right; some events don't have that particular "log_processed.message". When I put | spath input=_raw I see the events in table format, but I also see duplicate events. Can we avoid that?

index="sample" "log_processed.app"=mercury "log_processed.traceId"=dc57c0b7f0e8cfdee5002b62873f5de7
| spath input=_raw
| table _time, log_processed.message
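Applying the suggestion from the reply above to drop spath, the search would become (a sketch):

index="sample" "log_processed.app"=mercury "log_processed.traceId"=dc57c0b7f0e8cfdee5002b62873f5de7
| table _time, log_processed.message

If duplicate events remain for other reasons, | dedup _raw after the base search is a blunt but effective filter.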
You have a common field, just not a common name. That's easy to fix using the coalesce function.

index=foo (UserID=* OR ID=*)
| eval commonID=coalesce(UserID, ID)
| stats min(_time) as startTime, max(_time) as endTime, values(*) as * by commonID
| eval diff=endTime - startTime
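If a human-readable gap is preferred, tostring with the "duration" option (standard SPL) formats the difference:

| eval diff=tostring(endTime - startTime, "duration")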
Hi, I stumbled across this while searching for the same error, and thought I'd provide an answer in case someone else comes along from the first hit in their favorite search engine.

Have you tried doing exactly what it says to do? Specifically, here's the process on my own machine. Note I'm starting out in /opt/splunk/etc/auth. From there, let's find the path of the sslRootCAPath file specified in server.conf:

splunk@curie:/opt/splunk/etc/auth$ grep sslRootCAPath ../system/local/server.conf
sslRootCAPath = /opt/splunk/etc/auth/mycerts/chain.pem

Now that I know the active cert, I can make a backup copy of it just in case:

splunk@curie:/opt/splunk/etc/auth$ cp mycerts/chain.pem mycerts/chain.pem.2024-01-24

Then append appsCA.pem to that file and restart Splunk:

splunk@curie:/opt/splunk/etc/auth$ cat appsCA.pem >> mycerts/chain.pem
splunk@curie:/opt/splunk/etc/auth$ splunk restart

Worked like a charm.
I'm currently using the token $results_link$ to get a direct link to alerts when they get triggered. I've also set the "Expires" field to 72 hrs. However, if the alerts get triggered over the weekend, the results are always expired when checking them after 48 hours. Is it possible to have the alert results not expire after 48 hrs?