All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


We have transitioned from ServiceNow calendaring for on-call to Splunk On-Call. Our users are used to getting an email the week before they go on-call and then again as they're entering their on-call week. Is there a way to notify people when it's their on-call week?
Hi all, I have a table and I need to highlight the values that are greater than, let's say, 5 in a line graph. How do I select only those specific values in the search?
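One possible approach, sketched below under the assumption that the numeric field is called value (hypothetical): split the points above the threshold into their own field so they render as a separate series you can style differently.

index=myindex
| eval above_threshold=if(value > 5, value, null())
| timechart max(value) AS value max(above_threshold) AS above_threshold

Points at or below 5 stay null in above_threshold, so only the values you care about appear in that series.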
Hi, I am running the query below and expecting the failureCount and warningCount totals in a table (1 row only); however, it's not returning anything. Where am I going wrong?

index="deng03-cis-dev-audit"
| spath PATH=data.labels.verbose_message output=verbose_message
| search "data.labels.activity_type_name"="ViolationOpenEventv1"
| where (verbose_message like "%Oldest unacked message age%evt%" or verbose_message like "%Oldest unacked message age%rec%")
| eval error=case(like(verbose_message,"%above the threshold of 1800.000%"), "warning", like(verbose_message,"%above the threshold of 300.000%"), "failure")
| eval failureCount count by error="failure"
| eval warningCount count by error="warning"
| table failureCount, warningCount
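For what it's worth, eval has no "by" clause and nothing in that pipeline aggregates, so the last three lines are not valid SPL. A hedged sketch of one way to get the single-row totals is to replace them with a conditional stats (also using lowercase path=, the documented spath option name):

index="deng03-cis-dev-audit"
| spath path=data.labels.verbose_message output=verbose_message
| search "data.labels.activity_type_name"="ViolationOpenEventv1"
| where like(verbose_message, "%Oldest unacked message age%evt%") OR like(verbose_message, "%Oldest unacked message age%rec%")
| eval error=case(like(verbose_message,"%above the threshold of 1800.000%"), "warning", like(verbose_message,"%above the threshold of 300.000%"), "failure")
| stats count(eval(error="failure")) AS failureCount count(eval(error="warning")) AS warningCount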
Hello everyone, I am trying to separate data going into the main index from particular hosts. I am trying:

transforms.conf
[windows_ot]
DEST_KEY = _metadata_index
Regex = HOst123
Format = host_wineventlog
Source_key = Metadata:Host

props.conf
[wineventlog]
Transforms-setindex = windows_ot

Do we have another way to separate these 26 hosts' data (without accessing the host machines)? The data flow is: level 2 (heavy forwarder) sending data to level 3 (heavy forwarder), then to the main HF. Is there a way to separate the data in the main HF? Also, is the transforms.conf correct?
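For reference, the documented index-routing form looks like the sketch below. The attribute names and the special keys are case-sensitive (REGEX, SOURCE_KEY = MetaData:Host, DEST_KEY = _MetaData:Index), and a single alternation regex can cover all 26 hosts; the host names here are placeholders:

transforms.conf
[windows_ot]
SOURCE_KEY = MetaData:Host
REGEX = (?i)(Host123|Host124|Host125)
DEST_KEY = _MetaData:Index
FORMAT = host_wineventlog

props.conf
[wineventlog]
TRANSFORMS-setindex = windows_ot

One caveat: index-time transforms run where the data is first parsed, so this needs to live on the level 2 heavy forwarder; by the time events reach the main HF they are already cooked and will not be re-parsed by default.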
Hello, I am looking for some help here; this is a very weird issue that I am facing. I have a requirement to monitor Event IDs 4624 and 4625 from a specific set (10) of servers. I have used the following inputs.conf, but instead of receiving data for these specific events, I am receiving other event codes such as 4670, 4719, 4742, 4738, etc. I have tried almost all possible ways, but I am unable to understand what's really happening here.

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
# only index events with these event IDs.
whitelist = 4624, 4625
index = wineventlog
sourcetype = xyz
renderXml = false
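A common cause of this symptom is another app on the same forwarder also defining a [WinEventLog://Security] stanza without the whitelist, with the merged config winning over yours. One way to check which settings actually apply, and which file each line comes from, is btool on the forwarder:

$SPLUNK_HOME/bin/splunk btool inputs list WinEventLog://Security --debug

If the merged output does show your whitelist, note the forwarder still needs a restart after the change for it to take effect.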
In folder1 we have multiple files f1, f2, f3, f4 and need to configure the files with different sourcetypes. Below is the configuration we created, but it did not work:

[batch://<path_of the file>]
index=i1
sourcetype=s1
whitelist = f1
move_policy=sinkhole

[batch://<<path of the file>>]
index=i1
sourcetype=s2
whitelist = f2
move_policy=sinkhole
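One possible shape for this, offered as a sketch rather than a verified config: stanza names must be unique, so two stanzas cannot point at the identical path. Assuming the files sit directly under /folder1 and that batch paths accept the same wildcarding as monitor inputs, per-file stanzas avoid the collision (file name patterns are placeholders):

[batch:///folder1/f1*]
move_policy = sinkhole
index = i1
sourcetype = s1

[batch:///folder1/f2*]
move_policy = sinkhole
index = i1
sourcetype = s2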
Hi all, I have a line chart with a few fields and a threshold field. I want to highlight the data points that are above the threshold line. Is it possible to do so in Dashboard Studio?
Hi, I receive data from a particular product that is installed at various customers; the data arrives every 5 minutes. In the JSON there is a field named tname, and what I am interested in is, for every customer (let's say customerName is the field), checking which tname values we received in the last five minutes and comparing them to a .csv lookup file. I am only interested in showing values present in the returned data that are not present in the .csv. I hope the above makes sense. Thanks
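A hedged sketch of one way to express this, with the index name and the lookup file name assumed rather than known:

index=product_data earliest=-5m
| stats count BY customerName tname
| lookup expected_tnames.csv tname OUTPUT tname AS in_lookup
| where isnull(in_lookup)
| stats values(tname) AS tnames_missing_from_csv BY customerName

lookup leaves in_lookup null when there is no match, so the where clause keeps only the tname values that are absent from the .csv.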
We are using Java instrumentation for our applications running on the Kubernetes cluster. The applications are running as a normal user, "appuser", and the AppDynamics agent is showing an error:

MultiTenantAgent Dynamic Service error - could not open Dynamic Service Log /opt/appdynamics-java/ver22.4.0.33722/logs/fixeddeposit-service-cb865796d-jp29w/argentoDynamicService_05-09-2022-05.10.44.log
Running as user appuser
Cannot write to parent folder /opt/appdynamics-java/ver22.4.0.33722/logs/fixeddeposit-service-cb865796d-jp29w
Could NOT get owner for MultiTenantAgent Dynamic Services Folder
Likely due to fact that owner (null) is not same user as the runtime user (appuser)
which means you will need to give group write access using this command: find external-services/argentoDynamicService -type d -exec chmod g+w {}
Possibly due to lack of permissions or file access to folder: Exists: false, CanRead: false, CanWrite: false
Possibly due to lack of permissions or file access to log: Exists: false, CanRead: false, CanWrite: false
Possibly due to java.security.Manager set - null
Possibly due to missed agent-runtime-dir in Controller-XML and will need the property set to correct this...
Call Stack: java.io.FileNotFoundException: /opt/appdynamics-java/ver22.4.0.33722/logs/fixeddeposit-service-cb865796d-jp29w/argentoDynamicService_05-09-2022-05.10.44.log (No such file or directory)

Can anyone help me here?
Hello all, I need to ingest audit logs from Atlassian Jira, Confluence, and Bitbucket. Most of that is pretty straightforward, but I am finding the Jira audit logs (atlassian-jira.log and atlassian-servicedesk.log) seem very inconsistent: sometimes fields are missing, and occasionally a field will be long, with multiple words and spaces, and no field names or brackets. Are there any TAs that can assist with parsing on-prem Jira audit logs (atlassian-jira.log and atlassian-servicedesk.log)? Or is there any Splunk guidance I can give the Jira admins to help make the logging better? So far they have told me they don't want to update their log4j properties files because the changes will just get overridden the next time they upgrade.
Hello, I want to detect foreign IPs as a first step, then search the traffic for connections between those foreign IPs and other local IPs.

| tstats `security_content_summariesonly` values(All_Traffic.src_ip) AS src values(All_Traffic.dest_ip) AS dest values(All_Traffic.dest_ip_Country) AS dest_country values(All_Traffic.src_ip_Country) AS src_country from datamodel=Network_Traffic by _time
| eval attacker=if(src_country="","$src$","$dest$")
| search [ | tstats count from datamodel=Network_Traffic WHERE (All_Traffic.src_ip=attacker OR All_Traffic.dest_ip=attacker) by _time ]
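As written this cannot work: subsearches run before the outer search, so the inner tstats cannot see the attacker field from the outer pipeline, and $src$/$dest$ are dashboard-token syntax rather than field references. A hedged sketch that inverts the logic — build the foreign-IP list in the subsearch and feed it to the outer tstats (the country value is a placeholder, and the field names are assumed from your data model):

| tstats summariesonly=true count from datamodel=Network_Traffic
    where [| tstats summariesonly=true count from datamodel=Network_Traffic
        where All_Traffic.src_ip_Country!="" AND All_Traffic.src_ip_Country!="MyCountry"
        by All_Traffic.src_ip
    | rename All_Traffic.src_ip AS All_Traffic.dest_ip
    | fields All_Traffic.dest_ip]
    by All_Traffic.src_ip All_Traffic.dest_ip

This particular shape finds traffic toward IPs first seen as foreign sources; swap the rename around to chase the other direction.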
I have been fighting with a regex in my props.conf (Regex-working-on-search-but-not-props-transforms) and after a lot of testing, I came to the conclusion that my regex is fine, and my props.conf is fine. I had originally failed to see that SOME events were processing fine, and assumed all were bad. So I checked to see if there was a cutoff between the good ones and the bad ones, and found one, using a len(_raw) field. It turns out my props.conf seems to stop working somewhere after a 4096 character count. So, I made some test data and pulled it into Splunk.

Bash:

#!/bin/bash
x="a"
count=1
while [ $count -le 5000 ]
do
    echo "$x regex"
    ((count+=1))
    x+="a"
done

This just makes a bunch of lines, each one character longer than the last:

a regex
aa regex
aaa regex
aaaa regex
aaaaa regex
aaaaaa regex
aaaaaaa regex

Then I pull this into Splunk, 1 line per log, with a static host field = "x".

Props:

[host::x]
SHOULD_LINEMERGE = false
TRANSFORMS-again = regexExtract

Transforms:

[regexExtract]
REGEX = .*(regex)
DEST_KEY = _raw
FORMAT = $1
WRITE_META = true

My search:

index="delete" sourcetype="output.log"
| eval length=len(_raw)
| search length<4100
| table length
| sort - length

I'm assuming my KV limits are OK, based on this btool snippet:

/opt/splunk/etc/system/default/limits.conf [kv]
/opt/splunk/etc/system/default/limits.conf avg_extractor_time = 500
/opt/splunk/etc/system/default/limits.conf indexed_kv_limit = 200
/opt/splunk/etc/system/default/limits.conf limit = 100
/opt/splunk/etc/system/default/limits.conf max_extractor_time = 1000
/opt/splunk/etc/system/default/limits.conf max_mem_usage_mb = 200
/opt/splunk/etc/system/default/limits.conf maxchars = 10240
/opt/splunk/etc/system/default/limits.conf maxcols = 512

Now that we have all the setup: when I search my index, I see that every event up through 4097 characters long gets the index-time regex applied. After that, the search-time regex works fine, but the index-time one is no longer functional. How do I get it to continue processing beyond that ~4100 characters?
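That ~4096-character cutoff lines up with the default for LOOKAHEAD in transforms.conf, which caps how many characters of the event the REGEX is tested against. Raising it in the stanza should be worth a try (the value below is arbitrary). WRITE_META applies to writes to _meta, not to DEST_KEY = _raw, so it can likely be dropped:

[regexExtract]
REGEX = .*(regex)
LOOKAHEAD = 32768
DEST_KEY = _raw
FORMAT = $1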
Hi, I save the test results using a save ID and then I get a list of IDs (shown in a screenshot in the original post). But when I click on 'Save new test', it doesn't work. I have the same code on the test site as well, where it works as usual. A sample of the XML code is provided below.

<panel depends="$host_token$">
  <title>Save Test</title>
  <!--When the button is pressed, the token execute_save_test is set up with an outputlookup at the end; this is used later on to push the data back into the lookup in order-->
  <input type="text" token="save_test_name_token" searchWhenChanged="true">
    <label>Test_Name</label>
  </input>
  <input type="text" token="save_test_comments" searchWhenChanged="true">
    <label>Comments</label>
    <default>-</default>
  </input>
  <input type="dropdown" token="save_test_status" searchWhenChanged="true">
    <label>Mark test as gold, bad test or ordinary</label>
    <choice value="ORDINARY">ORDINARY</choice>
    <choice value="GOLD">GOLD</choice>
    <choice value="BAD TEST">BAD TEST</choice>
    <default>ORDINARY</default>
  </input>
  <html>
    <button class="btn" data-token-json="{&quot;execute_save_test&quot;:&quot;| eventstats max(ID) as max_ID | eval ID = if(isnull(ID),max_ID + 1,ID) | fields - max_ID | dedup ID | outputlookup Saved_Tests.csv&quot;}">Save new test</button>
  </html>
  <html depends="$saved_test_id$">
    <button class="btn" data-token-json="{&quot;execute_save_test&quot;:&quot;| eval ID = if(isnull(ID),$saved_test_id$,ID) | dedup ID | outputlookup Saved_Tests.csv&quot;}">Update current test (ID: $saved_test_id$)</button>
  </html>
</panel>

Could someone please advise what has gone wrong?

Regards,
Pravin
Hello all, I have a set of data as below; each column holds the value of one id over time.

_time     id=12345   id=12347   id=12349
01-févr   10         20         5
02-févr   12         45         9
03-févr   15         53         12
04-févr   17
05-févr
06-févr              120
07-févr              140        56
08-févr   57         150        60
09-févr   60         155        75
10-févr   70         175        90

I would like to calculate the delta and then fill the null deltas. I have this piece of code; so far I can calculate the delta for each id, and I am looking for a solution for filling the null deltas:

index="index" [|inputlookup test.csv | search id=1234** ]
| timechart latest(value) as valLast span=1d by id
| untable _time id valLast
| streamstats current=false window=1 global=false first(valLast) as p_valLast by id
| eval delta=valLast-p_valLast
| xyseries _time id delta
| streamstats count(eval(if(isnull(delta),0,null()))) as count by id

Result: the columns display the delta values for each id over time.

_time     id=1   id=2   id=3
01-févr
02-févr   2      25     4
03-févr   3      8      3
04-févr   2
05-févr
06-févr          120
07-févr          20     56
08-févr   57     10     4
09-févr   3      5      15
10-févr   10     20     15

Thanks in advance!
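If "fill the null delta" means treating the missing days as zero change, one minimal sketch is to replace the nulls before pivoting (assuming 0 really is the fill value you want):

| eval delta=valLast-p_valLast
| fillnull value=0 delta
| xyseries _time id delta

If instead the last known value should be carried forward before computing the delta, sorting by id and _time and running filldown on valLast ahead of the streamstats is another option, with the caveat that filldown has no by clause, so values can bleed across id boundaries.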
Hi there, this is in regards to this Splunk add-on:

"Splunk Add-on for Microsoft Cloud Services" https://splunkbase.splunk.com/app/3110/#/overview

I would like to find out which API permissions are required in Azure Active Directory for this add-on. I have been searching for it, but the only thing I found was "Azure Insight", and the link provided does not help either. In addition, "Azure Insight" isn't listed among the API permissions in Azure Active Directory.
Hi Team, what is the volume of events received, in bytes per day, from CyberArk EPM SaaS into Splunk?
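If the EPM data lands in its own index, one way to measure this from Splunk itself is the license usage log; the idx value below is a placeholder for whatever index the EPM add-on writes to:

index=_internal source=*license_usage.log* type=Usage idx="cyberark_epm"
| timechart span=1d sum(b) AS bytes_per_day

Each Usage event's b field is the number of bytes ingested for that index/sourcetype/host slice, so the daily sum approximates the volume per day.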
Hi, I have a very basic timechart from the search below; it just counts the number of events with event_type=40. The issue is we had a logging problem and received no events for a specific time period before we resolved it, so the timechart drops to zero and then back up to the usual levels. Can I remove this period from the timechart somehow?

index=main event_type=40
| timechart count(src_ip) by sensor
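One simple option, sketched with placeholder dates for the outage window: drop those buckets from the timechart output entirely, which leaves a gap instead of a dip to zero.

index=main event_type=40
| timechart count(src_ip) by sensor
| where _time < strptime("2022-05-01", "%F") OR _time > strptime("2022-05-03", "%F")

Whether the chart then shows a gap or connects the line across it depends on the panel's null-value setting.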
Hi All, I'm currently trying to configure an alert to trigger when one of 2 events is NOT present in the last 15 min. In short: if we have only Event1 but not Event2, then an alert should be triggered; if both events are present in the last 15 min, then no alert should be triggered.

Use case: the alert is being configured to notify us when a VPN tunnel interface goes down and stays down for more than 15 min. Generally these VPN connections terminate briefly but come back up after a few seconds, hence we would like to alert only if Event1 (down) took place in the last 15 min without Event2 (up) taking place.

Event1 - search query:
index=firewall 10.10.10.10 Firewall_Name_XYZ=TEST123 AND "Lost Service"

Event2 - search query:
index=firewall 10.10.10.10 Firewall_Name_XYZ=TEST123 AND (inbound "LAN-to-LAN" "created")

Search query to show both events:
index=firewall 10.10.10.10 Firewall_Name_XYZ=TEST123 AND ("Lost Service" OR (inbound "LAN-to-LAN" "created"))

Any assistance will be greatly appreciated.
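A hedged sketch of one way to wire this up as a single alert search scheduled over the last 15 minutes, triggering when the result count is greater than 0 (searchmatch is a standard eval function):

index=firewall 10.10.10.10 Firewall_Name_XYZ=TEST123 ("Lost Service" OR (inbound "LAN-to-LAN" "created"))
| eval status=if(searchmatch("Lost Service"), "down", "up")
| stats count(eval(status="down")) AS down_count count(eval(status="up")) AS up_count
| where down_count > 0 AND up_count = 0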
Hello Splunk Community! I have a query where I'm extracting data from different logs and displaying them on the same row in a statistics table. Here is my code:

index=main source=/opt/server/*/*userauth*
| rex field=_raw "\]\s\S+\s-\s\[(?<caller>\S+)\]\s\S+->(?<function>\S+)\s(?<logs>.+)\:\s\[(?<value>\S+)\]"
| where isnotnull(caller)
| chart values(value) over caller by logs useother=f

I used the chart command to turn the logs into column headers with their respective values. The result:

caller      logB     logC     logD
caller_id   valueB   valueC   valueD

However, when I narrow down the search by adding "validateSB", which is contained in most log entries,

index=main source=/opt/server/*/*userauth* validateSB
| rex field=_raw "\]\s\S+\s-\s\[(?<caller>\S+)\]\s\S+->(?<function>\S+)\s(?<logs>.+)\:\s\[(?<value>\S+)\]"
| where isnotnull(caller)
| chart values(value) over caller by logs useother=f

the column logA now appears as a column header with its respective value:

caller      logA     logB     logC     logD
caller_id   valueA   valueB   valueC   valueD

When I also search for a specific caller, the column logA appears with its respective value too. Does anybody know why this might be the case? Thanks in advance!
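One likely explanation, offered as a guess from the symptoms: chart caps the number of by-clause columns at 10 by default, and with useother=f the overflow series are silently dropped rather than rolled into an OTHER column. Narrowing the search reduces the number of distinct logs values, so logA makes the cut. Raising the limit should make the column appear in the broad search too:

| chart values(value) over caller by logs useother=f limit=0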
Hello, we're trying to connect the Deep Learning Toolkit to an on-prem Kubernetes cluster, but it looks like it's failing on the initial connection. We're using User Login, so the first thing I need help with is which certificate we're supposed to use. We've tried the one presented by the Kubernetes instance when we run "openssl s_client -connect server.com:6443" and one provided by the Kubernetes admin, but we still get the same error message:

Exception: Could not connect to Kubernetes. HTTPSConnectionPool(host='server.com', port=6443): Max retries exceeded with url: /version/ (Caused by SSLError(SSLError(0, 'unknown error (_ssl.c:4183)')))

Nothing is getting blocked in the firewall.
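For what it's worth, the certificate a Kubernetes client usually needs is the cluster CA certificate, not the API server's serving cert. Assuming you have a working kubeconfig for the cluster, this extracts it (a sketch, not DLTK-specific guidance):

kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > k8s_ca.crt

The resulting PEM file may be what belongs in the toolkit's certificate field.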