All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, we want to enable SSL on the management port (8089) of our AWS Splunk Enterprise cluster, using our own certificates (provided by my company). I followed the required steps from various documents and enabled splunkd SSL in server.conf on all Splunk components: cluster master, indexer cluster, search head cluster, and deployer.

How can I verify that SSL is correctly enabled and is using our own certificates? I don't see any errors in the splunkd logs, but I don't know how to prove that the Splunk instances are communicating with our own certs.

Also, how is secure communication happening without client certificates? Don't we need both client and server certs on all Splunk instances to communicate securely on port 8089 (or any port, for that matter)? Any help is highly appreciated.
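One way to confirm which certificate splunkd presents on 8089 is to pull it with openssl and inspect the issuer and subject. A sketch (the hostname is a placeholder; the local key/cert generation at the end only demonstrates the inspection commands against a throwaway cert):

```shell
# Inspect the certificate splunkd actually serves on 8089
# (replace splunk-cm.example.com with your cluster master):
#   openssl s_client -connect splunk-cm.example.com:8089 -showcerts </dev/null 2>/dev/null \
#     | openssl x509 -noout -issuer -subject -dates
# If the issuer names your company CA rather than Splunk's default
# "SplunkCommonCA", your certs are in use.

# Demo of the same inspection against a locally generated throwaway cert:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.pem \
  -subj "/CN=MyCompanyCA" 2>/dev/null
openssl x509 -in /tmp/demo.pem -noout -issuer
```

On the client-cert question: by default splunkd performs server-side TLS only; mutual (client-certificate) authentication is a separate, optional setting (requireClientCert in server.conf), so server certs alone are enough for the traffic to be encrypted.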
Hello, I am trying to figure out how to perform field calculations based on rules coming from a lookup table. This is my use case:
- I have event data in plain-text format that is ingested into Splunk with the "generic_single_line" sourcetype
- I have configured props.conf to extract fields using regular expressions
- I have configured a lookup table to enrich the event data (code -> label, etc.)

Now there is a field that needs to be populated from values extracted from the source, by applying a rule defined in the lookup table. Is that possible? For example, my lookup table looks like this:

code, type, key_fields
001, E, field1
002, E, field1 + field2
003, R, field1 + field3 + field4
...etc.

I need to create a new output field called "unique_key" whose value is the field value, or concatenation of field values, defined in the lookup table for the matching code. Thanks in advance for your help.
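One hedged approach (lookup and field names are taken from the example above): pull the rule columns from the lookup, then build unique_key with a case() over the known codes. SPL cannot evaluate the rule string stored in the lookup directly, so the rules are restated as eval branches:

```spl
... | lookup my_rules.csv code OUTPUT type key_fields
| eval unique_key=case(
    code=="001", field1,
    code=="002", field1.field2,
    code=="003", field1.field3.field4
  )
```

If the rule set is large, generating this eval from the lookup (for example with a scheduled search, or a custom search command) avoids hand-maintaining the case() branches.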
Hello, I recently joined the Splunk community and really like your services, but there is a small glitch that, while it may not seem important, matters to me because it costs valuable minutes. Whenever I submit my login details to Splunk, it takes two or three attempts before it finally lets me in. Yesterday it showed errors like "incorrect username or password," but when I double-checked, I had submitted the correct information. Today it is working again. Why do these glitches occur from time to time? Please reply as soon as possible and let me know a solution to my issue. Thanks for reading.
Hi team, I am getting the error "Request failed: Session is not logged in." when trying to run REST APIs from a Python script. The equivalent curl command works fine: it generates the session key, connects to Splunk, and fetches results. I am using this in my script:

import splunklib.client as client
service = client.connect(host=host, port=8089, username=user, password=password)

Am I missing anything here?
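For what it's worth, splunklib.client.connect() defaults to HTTPS on 8089; a mismatch between the scheme your curl command uses and the one the SDK assumes is a common cause of this error. A minimal sketch of the two pieces involved (the host name is a placeholder, not a real value):

```python
# Sketch only -- host name below is a placeholder.

def login_url(host, port, scheme="https"):
    # splunklib.client.connect() defaults to scheme="https". If splunkd has
    # SSL disabled on 8089 (or vice versa), pass the scheme explicitly:
    #   client.connect(host=host, port=8089, scheme="http", ...)
    return "%s://%s:%d/services/auth/login" % (scheme, host, port)

def auth_header(session_key):
    # Every REST call after login must carry the session key; if it is
    # missing or expired, splunkd answers "Session is not logged in."
    return {"Authorization": "Splunk " + session_key}

print(login_url("splunk.example.com", 8089))
# -> https://splunk.example.com:8089/services/auth/login
```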
Hi, I want to enable SSL for Splunk Web with third-party certificates provisioned by my company. We have a server cert, server key, intermediate certs, and root CA cert, all provisioned by my company. After reading https://docs.splunk.com/Documentation/Splunk/7.3.3/Admin/Webconf, my understanding is that the server cert file for web SSL should contain [server cert] [all intermediate certs] [root CA cert], all in one file, provided via the serverCert setting. But some of the posts below say that the serverCert file should contain only the [server cert], not the intermediates and root. I'm confused about which one is correct. Please help. I'm using Splunk 7.3.3, by the way.
https://answers.splunk.com/answers/508313/how-to-use-rest-api-over-port-8089-with-the-splunk-1.html
https://answers.splunk.com/answers/462626/how-do-i-secure-splunkweb-on-a-search-head-cluster.html
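For reference, the combined-chain form matches the Splunk Web SSL documentation: the serverCert file contains the server certificate first, followed by the intermediate certificate(s) and, optionally, the root CA. A sketch (file names and paths are examples only):

```
# $SPLUNK_HOME/etc/system/local/web.conf  (example paths)
[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/splunkweb.key
serverCert = /opt/splunk/etc/auth/mycerts/splunkweb_chain.pem

# splunkweb_chain.pem, concatenated in this order:
#   -----BEGIN CERTIFICATE-----   (server certificate)
#   -----BEGIN CERTIFICATE-----   (intermediate certificate(s))
#   -----BEGIN CERTIFICATE-----   (root CA certificate, optional)
```

The posts recommending a bare server cert generally predate this; browsers need the intermediates to build the chain unless they already trust them.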
Hi team, I would like to use rex to extract the table name from the combined event below. Both events were merged into one event using the transaction command. Can you please help?

25324/-285213840 WRK:DF_E4CAC858_tor Thu Apr 9 23:17:25.077194 dbprq.c770 doQueryDiagnostics: The following SQL query took 535 seconds which is equal to or greater than QueryExecutionTimeThreshold (4 seconds) for User(AF) with DBProxyUser(AF). 25324/-285213840 WRK:AF_E4CAC858_tor Thu Apr 9 23:17:25.080304 dbpq.c782 SELECT * FROM PRODDTA.Employee WHERE ( A=1 )

Thanks, Abilan
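A hedged starting point: anchor on the FROM keyword and capture the dotted identifier after it. In SPL that would be | rex "FROM\s+(?<table_name>[\w.]+)". The Python below just verifies the same pattern against the sample event:

```python
import re

# Same pattern as the rex sketch; [\w.]+ covers schema.table names.
PATTERN = re.compile(r"FROM\s+(?P<table_name>[\w.]+)")

sample = ("25324/-285213840 WRK:AF_E4CAC858_tor Thu Apr 9 23:17:25.080304 "
          "dbpq.c782 SELECT * FROM PRODDTA.Employee WHERE ( A=1 )")

match = PATTERN.search(sample)
print(match.group("table_name"))  # -> PRODDTA.Employee
```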
I have an alert that checks the number of messages stuck in a queue, with a 4-hour suppression; otherwise there would be a flood of alerts. Now I need to make it more dynamic: it should alert only if an alert has not already been sent for the same result within the last 4 hours. Can someone guide me with this, please?
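Per-result throttling in the alert configuration may do this without any SPL changes: suppression keyed on a field means a new alert fires only when that field's value hasn't already alerted within the window. A sketch (queue_name is a placeholder for whatever field identifies "the same result"):

```
# savedsearches.conf stanza for the alert (field name is an example)
alert.suppress = 1
alert.suppress.period = 4h
alert.suppress.fields = queue_name
```

In Splunk Web, the same thing is configured under the alert's Throttle settings ("Suppress results containing field value").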
Based on the documentation at https://docs.splunk.com/Documentation/MintAndroidSDK/5.2.x/DevGuide/UseProGuardwithSplunkMINT, we have to upload the mapping file every time, which is tedious because we ship several releases every week. Is there any mechanism to upload this file automatically as part of our release process? Please suggest the best approach. Thanks!
Hi gurus, let's say an Enterprise license switches to the Free license, and I then use Splunk under the Free license for one month. (During that period indexing doesn't stop; only searching is not possible.) One month later, I apply an Enterprise license. Am I then able to search the data that was indexed during that one-month Free license period?
What is the proper way to fix the following error message, which appears in splunkd.log from the Splunk_TA_paloalto minemeld_feed.py script?

ERROR ExecProcessor - message from "/opt/splunk/bin/python2.7 /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/minemeld_feed.py" /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/requests/packages/urllib3/connectionpool.py:843: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings

A previous answer to this question suggested adding verify=False to the script, but that won't survive upgrades and is bad practice in general. The MineMeld server has a valid certificate (not a self-signed one), but it appears the issuing CA is not trusted. How can I install the proper CA so certificate validation succeeds?
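One upgrade-safe sketch, assuming the TA's bundled requests honors the standard REQUESTS_CA_BUNDLE environment variable (recent requests versions do): build a bundle that includes your company CA and export the variable in splunkd's environment (for example via splunk-launch.conf) instead of editing the script. Paths below are examples only:

```shell
# Example paths only; /tmp is used for illustration -- in practice keep the
# bundle somewhere stable like $SPLUNK_HOME/etc/auth/custom/.
# Start from the OS trust store (location varies by distro):
cp /etc/ssl/certs/ca-certificates.crt /tmp/cacert.pem 2>/dev/null \
  || printf '# system bundle placeholder\n' > /tmp/cacert.pem
# Append the company CA (placeholder here; in practice: cat company_ca.pem >>):
printf '# company CA placeholder\n' >> /tmp/cacert.pem
# Point requests at the combined bundle for splunkd's environment:
export REQUESTS_CA_BUNDLE=/tmp/cacert.pem
echo "$REQUESTS_CA_BUNDLE"
```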
Hi all, our Zoom.us admin webpage lists the top 10 locations by meeting participants, and we see a number from China. We would not expect this and are concerned about it. I have been trying to use Splunk to determine who these participants are, to confirm whether this is simply a case of poor geolocation, or to confirm that all participants really were legitimate. Our Splunk instance hits the Zoom API per https://answers.splunk.com/answers/812377/covid-19-response-is-splunk-able-to-ingest-logs-fr.html . Unfortunately, the API does not seem to expose geolocation or IP data except for our own users' sign-in and sign-out attempts. I have also been trying to leverage Splunk's Remote Work Insights (RWI) to identify these potentially rogue participants. Does anyone have ideas for how to use Splunk to solve this problem, or do any RWI experts have experience getting this data from the Zoom API?
I made a query that involves transposing a timechart (span=1w, analyzing since 1/1/2020). The result is the exact layout I want; however, several of the date columns are missing after the transpose (i.e., nothing in February showed up). Is there a limit in Splunk on how many columns are transposed?

Query:

splunk_server=indexer* index=wsi_tax_summary sourcetype=stash intuit_tid=* intuit_offeringid=* capability=* error_msg_service=* http_status_code_host=*
| timechart span=1w dc(intuit_tid) as total_requests, dc(eval(if(error_msg_service="OK", intuit_tid, null()))) as total_success
| sort -_time
| eval _time=strftime(_time,"%m/%d/%y")
| eval total_failures=total_requests-total_success
| eval success_rate=round((total_success/total_requests)*100,2)
| transpose header_field=_time column_name=week_starting
| regex week_starting!="^_"
| eval sortkey=case(week_starting="total_requests",1, week_starting="total_success",2, week_starting="total_failures",3, week_starting="success_rate",4)
| sort sortkey
| fields - sortkey
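This matches transpose's default behavior: it transposes only the first 5 rows unless you pass an explicit row count, which would make later weeks vanish as columns. A sketch of the changed line (52 is an example sized for a year of weekly buckets):

```spl
... | transpose 52 header_field=_time column_name=week_starting
```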
Hi - I'm struggling with a problem in a drilldown search used in a dashboard panel. On Splunk 7.2.1 the drilldown works fine; Splunk 8 gives the following error: "Invalid earliest time". I narrowed the issue down to an eval statement in the drilldown - |eval k=mvfilter(match(t, ",1$")) - which matches a field that ends with ",1". The issue seems to be the $. I've tried replacing the $ with %24 and %2524, replacing double quotes with single quotes, and protecting the $ with a backslash (out of desperation). This all fails - well, %2524 works once, then fails with the "Invalid earliest time" error on subsequent executions. When I check the drilldown, Splunk has translated %2524 to %$. Does anyone have any guidance or help to offer? Thank you!
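If it helps: in Simple XML, $ delimits tokens inside a drilldown search string, so a literal dollar sign has to be doubled as $$ (Splunk 8 appears to enforce this more strictly than 7.x - treat that attribution as a hunch). A sketch of the eval with the escaped regex anchor:

```spl
|eval k=mvfilter(match(t, ",1$$"))
```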
Hi, I am trying to merge the count for the row "EUR%20" into "EUR". Please help. Search string:

sourcetype=access_combined index="web_technology" host="*" clientip="*" REGION_CD!=ALL REGION_CD!=ALL%20
| stats count by REGION_CD
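One way (assuming the stray rows differ only by URL-encoded trailing whitespace) is to normalize REGION_CD before the stats, for example with urldecode and trim:

```spl
sourcetype=access_combined index="web_technology" host="*" clientip="*" REGION_CD!=ALL REGION_CD!=ALL%20
| eval REGION_CD=trim(urldecode(REGION_CD))
| stats count by REGION_CD
```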
How can I perform a search to get a count of how many times each alert has fired over a period of time?
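A hedged sketch using the audit trail (requires read access to the _audit index; ss_name is the saved search name in those events, and earliest is whatever period you care about):

```spl
index=_audit action=alert_fired earliest=-30d
| stats count by ss_name
| sort - count
```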
Hello all, we are having storage capacity issues and are trying different things to free up space for ingestion. We performed TSIDX reduction on one index as a test, to see what percentage of space we could reclaim, and now we would like to apply retention to those reduced buckets. Will retention proceed as a normal process, or will there be any problem applying retention to reduced buckets? Please let me know. Thanks.
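For reference, retention and tsidx reduction are independent indexes.conf controls, and reduced (mini-tsidx) buckets still age out by time and size limits as usual. A sketch of the two knobs side by side (index name and values are examples):

```
# indexes.conf (example values)
[my_index]
enableTsidxReduction = true
timePeriodInSecBeforeTsidxReduction = 604800   # reduce buckets older than 7 days
frozenTimePeriodInSecs = 7776000               # freeze (delete/archive) after 90 days
```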
Does this app support the Bitbucket 2.0 API update?
Since completing the upgrade of our search head to 8.0.0, a scheduled search that emails an attached CSV now adds an extra blank line between every event. Some of our scheduled CSVs are consumed by other automation, so the extra lines are breaking things. Is this a known issue that was fixed in 8.0.1? I see this in the 8.0.1 "Fixed Issues" list: SPL-176009, SPL-166728 - Alert email spacing issue, but I don't know how to look up the details of those issue IDs to know whether they relate to my problem.
Here's what I've got so far:

index="myindex" (host="192.168.0.100" OR host="192.168.0.101") (msg="login OK" OR msg="login FAILED")
| transaction user maxspan=30s endswith="login OK"
| eval FailedLogons=eventcount-1
| where msg="login FAILED" AND FailedLogons >= 3
| table _time user FailedLogons

For example, a user's account quickly fails to log on 3 times, then successfully logs on. I consider this suspicious activity and want to track it. This query gets me most of the way there; however, it assumes all events before "login OK" are failures, which may not always be the case. transaction combines msg so there is only one value of each, although _raw still contains the combined events. Is there a way to look for 3 consecutive failures that end with a success?
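A hedged alternative to transaction: order events per user, track the run length of consecutive failures with streamstats (resetting after each success), then flag successes preceded by a run of 3 or more. Sketch below; double-check the reset_after quoting against the streamstats docs for your version:

```spl
index="myindex" (host="192.168.0.100" OR host="192.168.0.101") (msg="login OK" OR msg="login FAILED")
| sort 0 user _time
| streamstats count(eval(msg="login FAILED")) as consecFails reset_after="(msg==\"login OK\")" by user
| where msg="login OK" AND consecFails >= 3
| table _time user consecFails
```

Because the counter only increments on failures and resets after each success, consecFails at a "login OK" event equals the number of uninterrupted failures immediately before it.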
Hello experts, greetings! We installed the 4.4.x version of the AppDynamics machine agent on one server, but the controller didn't receive any data from the agent, so we upgraded from 4.4.x to 20.2.x. Initially it collected CPU and memory data, but not disk space. We found this error in the logs:

[extension-scheduler-pool-4] 07 Apr 2020 16:14:46,606 WARN RawCollectorUtil - Collector process did not finish in 27000 milliseconds

So we increased the "sampling interval" value from 30000 ms to 60000 ms, but there was still no disk-space data. After some time we suddenly stopped receiving any metrics (CPU, memory, and disk), with this error:

[extension-scheduler-pool-0] 09 Apr 2020 14:29:13,441 WARN RawCollectorUtil - Collector process did not finish in 54000 milliseconds

The 20.2.x machine agent is configured on Windows Server 2016 Standard, 64-bit, 256 GB RAM. Any idea why we are having problems collecting the data? Thanks, Selvaganesh E