Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hello, after checking this with the Splunk Support Team, it turned out to be just a case of adding the IP (of the system where PowerBI Desktop is hosted) to the allow list under Server Settings -> IP Allow List Management -> "Search Head API Access". Thank you. Regards, Madhav
--updated this question to achieve the same behavior on DS Dashboards

Hello, I have a table viz on my dashboards (Simple XML and DS Dashboards) - sample data as given below.

| makeresults format=csv data="cust, sla
Cust1,85
Cust2,96
Cust3,99
Cust4,89
Cust5,100"
| fields cust, sla

How can I colour code the "sla" column based on the conditions below in both Simple XML (without using JavaScript) and DS Dashboards?

Green: (cust IN (Cust1,Cust3,Cust4) AND sla>=90) OR (cust IN (Cust2,Cust5) AND sla>=95)
Amber: (cust IN (Cust1,Cust3,Cust4) AND sla>=85 AND sla<90) OR (cust IN (Cust2,Cust5) AND sla>=90 AND sla<95)
Red: (cust IN (Cust1,Cust3,Cust4) AND sla<85) OR (cust IN (Cust2,Cust5) AND sla<90)

Thank you. Regards, Madhav
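One possible approach (a sketch, not a confirmed solution from this thread: the sla_status field name and the hex colours below are illustrative assumptions) is to compute a helper status field with eval and colour that column instead:

| makeresults format=csv data="cust, sla
Cust1,85
Cust2,96
Cust3,99
Cust4,89
Cust5,100"
| fields cust, sla
| eval sla_status=case(
    (cust IN ("Cust1","Cust3","Cust4") AND sla>=90) OR (cust IN ("Cust2","Cust5") AND sla>=95), "Green",
    (cust IN ("Cust1","Cust3","Cust4") AND sla>=85) OR (cust IN ("Cust2","Cust5") AND sla>=90), "Amber",
    true(), "Red")

In Simple XML the helper column can then be coloured without JavaScript, e.g. <format type="color" field="sla_status"><colorPalette type="map">{"Green":#53A051,"Amber":#F8BE34,"Red":#DC4E41}</colorPalette></format>; Dashboard Studio tables offer similar per-column colour rules in the table configuration.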
@pdpsplunk100 @sundareshr @ppablo Is there any update on achieving the same in Splunk version 9.0.5.1?
Hi Everyone, We're in the process of updating the SSL certificates on our Splunk servers. However, when attempting the upgrade, we encounter the following error:

"Cannot decrypt private key in "/opt/splunk/etc/apps/*/local/Splunk.key" without a password. Network communication with splunkweb may fail or hang. Consider using an unencrypted private key for Splunkweb's SSL certificate."

Could anyone provide assistance with this issue? Below are the steps we followed while generating the certificate. Please let us know if you spot any mistakes. We're running Splunk 9.0.0.

## Go to /root/certs/
cd /root/certs/

## Create a new directory for the certs
mkdir certs_2024

## Create the server key
openssl genrsa -des3 -out splunk.key 2048
password123######
password123######

## Create a no-pass key
openssl rsa -in splunk.key -out splunk.nopass.key
enter passphrase - <<<password>>>

## Generate the CSR file
openssl req -new -sha256 -key splunk.nopass.key -out splunk.csr

Once we receive the certificate, we run the steps below.

vi end_entity_cert
<<paste the end_entity_cert value for the hostname and save>>

vi intermediate_cert
<<paste the intermediate_cert value for the hostname and save>>

cp splunk.nopass.key /opt/splunk/etc/apps/App_hostname_ssl/local

## Go to the certificates folder
cd /home/Splunk/certs_renewal/

## Copy the rootCA.pem into /opt/splunk/etc/apps/App_hostname_ssl/local

## Create the certificate chain
cat end_entity_cert splunk.key intermediate_cert rootCA.pem >> full.pem

## Verify certificate validity
openssl x509 -enddate -noout -in full.pem

./splunk restart
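One thing worth checking, offered as a suggestion rather than a confirmed fix: in the chain step above it is the DES3-encrypted splunk.key that gets concatenated into full.pem, and the error message itself asks for an unencrypted key. A minimal variant using the passwordless key generated earlier (same file names as in the steps above):

## Build the chain with the no-pass key instead of the encrypted one
cat end_entity_cert splunk.nopass.key intermediate_cert rootCA.pem > full.pem

It is also worth verifying that privKeyPath in web.conf points at splunk.nopass.key rather than splunk.key.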
Hi @scelikok, I modified the query as below and now it's working fine for me.

| eval Used_Space=case(
    match(Used_Space,"M"), round(tonumber(replace(Used_Space,"M",""))/1024,2),
    match(Used_Space,"G"), Used_Space)

Thank you for your inputs though..!!
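Note that the case() above leaves the "G" suffix on gigabyte values. If a purely numeric result is ever needed, a hedged variant (Used_Space_GB is an illustrative field name, not from the original post) could normalize both units to GB:

| eval Used_Space_GB=case(
    match(Used_Space,"M"), round(tonumber(replace(Used_Space,"M",""))/1024,2),
    match(Used_Space,"G"), tonumber(replace(Used_Space,"G","")))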
You're right @ITWhisperer, I can't change the time from what was used in the base search, which brings me to my second question: how can I add a drilldown to the same panel with a different timestamp? I want to expand the bar chart for a particular time into a drilldown containing more detailed information for that selected time frame.
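One common Simple XML pattern (a sketch under assumptions: the drill_earliest/drill_latest token names and the placeholder searches are illustrative; $earliest$ and $latest$ are the predefined drilldown tokens for the clicked chart segment) is to capture the clicked time range in tokens and drive a separate detail panel from them:

<chart>
  <search>
    <query><!-- your existing base-search-driven panel query --></query>
  </search>
  <drilldown>
    <set token="drill_earliest">$earliest$</set>
    <set token="drill_latest">$latest$</set>
  </drilldown>
</chart>
<panel depends="$drill_earliest$">
  <table>
    <search>
      <query><!-- a more detailed search for the drilldown view --></query>
      <earliest>$drill_earliest$</earliest>
      <latest>$drill_latest$</latest>
    </search>
  </table>
</panel>

Because the detail panel runs its own search over the clicked window, it is not bound by the base search's time range.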
@Sandivsu - Not sure if you can do that with props and transforms. But I'll provide a solution you can apply at the search query level.

index=<your-index> .....
| rex field=_raw "\s\w+\[\w+\]:\s(?<json_content>\{.*\})"
| spath input=json_content

I hope this helps!!! Kindly upvote if it does!!!
Hi @Mrig342, It is a tested solution in my lab environment. Can you please check that the double quotes are the correct characters in your search? Sometimes they get replaced when copying from the browser.
Hi @Ismail_BSA, Splunk cannot convert/read these binary files. Maybe you can install SQL Server on Server C, import these audit files into that SQL Server instance, and query it with DB Connect.
Hi @scelikok, Thank you for the query, but it's not working for me. It's giving this error: Error in 'EvalCommand': The expression is malformed. Expected ). Can you please help modify the query? Thank you..!!
@bhall_2 - I haven't heard of it. Only the Splunk Universal Forwarder (the Splunk agent on the host).
@avikc100 - You can add custom CSS to your Simple XML dashboard to achieve this.

Dashboard XML source code:

<form>
  <label>Fixed Column Sticky</label>
  <row depends="$tkn_never_show$">
    <panel>
      <html>
        <style>
          #myTable table td:nth-child(1) { position: fixed !important; }
          #myTable table th:nth-child(1) { position: fixed !important; }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <table id="myTable">
        <search>
....

If position: fixed doesn't work, you can try position: sticky; instead.

I hope this helps!! If it does kindly upvote!!!
I was browsing for a solution to this issue and eventually, almost by accident, found one myself. See if this works for you:

search sourcetype=type1 field1='$arg1$'
| rename field2 as query
| fields query
| eval newField=query

Single quotes return the value of the field in an eval expression.
Hi @asabatini, You can reorder or modify raw data using transforms; you need to capture parts of the message with a regex and reorder the captured groups, e.g. $1$3$2. Please see the documentation below:
https://docs.splunk.com/Documentation/Splunk/9.0.3/Data/Anonymizedata#Configure_the_transforms.conf_file
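As a rough illustration of that doc's approach (the stanza names, sourcetype, and three-part message layout here are invented for the example, not taken from the thread):

props.conf:

[my_sourcetype]
TRANSFORMS-reorder = reorder_raw

transforms.conf:

[reorder_raw]
# Capture three space-separated parts and rewrite _raw with parts 2 and 3 swapped
REGEX = ^(\S+) (\S+) (\S+)
FORMAT = $1 $3 $2
DEST_KEY = _raw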
@rickymckenzie10 - To simplify your understanding of warm and cold buckets and the different parameters (only applicable when you are not using volumes):

Warm buckets -> buckets in the /db path
Cold buckets -> buckets in the /colddb path
Frozen buckets -> deleted/archived data

Warm-to-cold bucket movement -> when the maxWarmDBCount bucket count is reached.
Cold-to-frozen (deletion, max age) bucket movement -> when all events in a bucket are older than frozenTimePeriodInSecs.

I hope this helps you understand the parameters better. Kindly upvote if it does!!!
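For context, these parameters live in indexes.conf; a sketch with illustrative values (the index name and numbers are made up):

[my_index]
# hot and warm buckets live under homePath
homePath = $SPLUNK_DB/my_index/db
# cold buckets live under coldPath
coldPath = $SPLUNK_DB/my_index/colddb
# number of warm buckets kept before the oldest rolls to cold
maxWarmDBCount = 300
# buckets whose newest event is older than this roll to frozen (~90 days)
frozenTimePeriodInSecs = 7776000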
Hi @richa, Since you asked to alert on data sources that stopped for more than 24 hours, it will not show yesterday's logs. You can change the delay parameter according to your needs; 86400 seconds is equivalent to 24 hours.
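For reference, a generic sketch of this kind of check (the original search from the thread isn't shown here, so the index scope and field choices below are assumptions), where the 86400 threshold is the piece you would tune:

| tstats latest(_time) as last_seen where index=* by index, sourcetype
| where now() - last_seen > 86400
| eval last_seen=strftime(last_seen, "%F %T")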
I have tried the option of setting the tag, but the problem is that by the time the first artifact sets the tag, the second one has already passed the decision block, and hence the playbook repeats.
A quick win would be tagging the container. You can edit your playbook to check whether the container's tag is XYZ, which it will not be on the first run (for the first artifact). Once you call your action to create an incident, change the container's tag to XYZ; even if the next artifact triggers the playbook, the tag will already be XYZ and the create-incident action will not be called, as it will not satisfy your condition.

While creating artifacts manually (via REST, for example), you can force the parameter "run_automation" to false, preventing that new artifact from triggering the playbook execution. But in your case the data is coming from the export app, so maybe you can find some settings in there to change this behavior (honestly, I don't recall one, but maybe you can find something interesting).
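As a rough illustration of the tagging step (hedged: the host, token, and container ID are placeholders, and the /rest/container update behavior should be confirmed against your SOAR version's REST documentation):

# Tag the container via the SOAR REST API so later runs see the XYZ marker
curl -k -H "ph-auth-token: <automation-token>" \
     -d '{"tags": ["XYZ"]}' \
     https://<soar-host>/rest/container/<container_id>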
Your props stanza is not matching the stanza name in transforms. Not sure if that was a typo... About a typo: you don't need that first pipe in the INGEST_EVAL. Try this instead (I changed the regex a bit):

Props.conf:

[your_sourcetype]
TRANSFORMS-set_time = set_time_from_file_path

Transforms.conf:

[set_time_from_file_path]
INGEST_EVAL = _time = strptime(replace(source, ".*/ute-(\\d{4}-\\d{2}-\\d{2}[a-z]+)/([^/]+/[^/]+).*","\\1"), "%Y-%m-%d_%H-%M-%S")
Hey Ricky, AFAIK maxWarmDBCount doesn't affect the rollover of data to frozen (but it can be storage hungry, so be careful with that); that is something frozenTimePeriodInSecs does instead. In your case, if I understood correctly, the frozen time has already passed but your data did not roll over. That may be because your cluster manager is too busy at the moment (and you are experiencing delay in this processing), OR maybe it is waiting for the buckets to hit a size threshold. Check the bucket replication status as well; it may indicate whether there is a problem there... Are you using the maxTotalDataSizeMB key by any chance? Try adding that as well, to see if you get any different behavior.
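To see which buckets are actually eligible, a hedged dbinspect sketch (your_index is a placeholder; the field names come from dbinspect's standard output):

| dbinspect index=your_index
| eval newest_event_age_days=round((now() - endEpoch) / 86400, 1)
| table bucketId, state, startEpoch, endEpoch, newest_event_age_days, sizeOnDiskMB
| sort - newest_event_age_days

A bucket freezes only when its newest event (endEpoch) exceeds frozenTimePeriodInSecs, so a bucket spanning a long time range can appear to "hold on" to old data.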