All Topics

Hello, we use the AWS Add-on for Splunk for all of our AWS inputs. For a few months now, after many years of this working correctly, we no longer see data under this search:

    sourcetype=aws:config:notification configurationItem.resourceType=AWS::S3::Bucket

The thing is, nothing changed other than updates to the AWS Add-on. And we still get data here for every resourceType I can think of EXCEPT S3, so I have to assume the AWS-side configuration and the inputs are fine:

    sourcetype=aws:config:notification

I have looked at this from every angle I can think of and have had a support case open for a while now. Any thoughts or similar cases? Thanks!
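One way to narrow down exactly when S3 dropped out, and whether it lines up with an add-on upgrade, is a plain diagnostic search over the same sourcetype (a sketch for investigation only, not a fix):

    sourcetype=aws:config:notification
    | timechart span=1d count by configurationItem.resourceType limit=20

If AWS::S3::Bucket stops exactly at the upgrade date, the add-on update is the prime suspect; the Config input's own logging in index=_internal around that time is worth checking as well.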
It seems that a search head captain needs to be running before new members can be added to a cluster? We're trying to automate the installation using Ansible. In Ansible we can specify which server should be the captain and which should be members. However, if a server designated as a member boots up first, it can't bring up the cluster because it isn't the captain. How do we overcome this? Do we need to start the captain's server first, before we bring up the rest of the members?
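The usual bootstrap order sidesteps this: splunk init shcluster-config runs per member and does not require a captain to exist, and the captain is only designated afterwards with a one-time bootstrap command. A minimal sketch, with hostnames, ports and credentials as placeholders:

    # On every member, once splunkd is up:
    splunk init shcluster-config -auth admin:changeme \
        -mgmt_uri https://sh1.example.com:8089 \
        -replication_port 9200 \
        -conf_deploy_fetch_url https://deployer.example.com:8089 \
        -secret shcluster_key
    splunk restart

    # On the designated captain only, after all members are reachable:
    splunk bootstrap shcluster-captain \
        -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" \
        -auth admin:changeme

So in Ansible terms: bring all members up with the init step in any order, then run the bootstrap task once against the node you want to start as captain (captaincy can move later via election).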
Is there a way to add a dropdown input to a dashboard that can change which CSV lookup it uses?
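One common pattern is to put the lookup file name in a dropdown token and reference that token in the panel search. A minimal Simple XML sketch, with lookup_a.csv and lookup_b.csv as hypothetical lookup names:

    <input type="dropdown" token="lookup_file">
      <label>Lookup</label>
      <choice value="lookup_a.csv">Lookup A</choice>
      <choice value="lookup_b.csv">Lookup B</choice>
      <default>lookup_a.csv</default>
    </input>

and in the panel:

    <query>| inputlookup $lookup_file$</query>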
Hi there, I have data like this:

    Server  load
    A       65
    A       50
    B       35
    C       55
    B       45
    C       70

I want to get the maximum (peak) load of each server and display it in a table format, like so:

    Server  Peak load
    A       65
    B       45
    C       70

I need help with how I can do that.
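A minimal sketch, assuming the fields are already extracted as Server and load:

    ... your base search ...
    | stats max(load) as "Peak load" by Server

stats max() takes the highest load per Server value and returns one row per server, which is exactly the table shape above.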
Hi, I have a dashboard with 3 panels:

1 - area chart using token1 and token2
2 - bar chart using token2 and token3
3 - panel (table) that populates data based on drilldown on the charts above

When I drill down on the area chart, the drilldown works fine. When I drill down on the bar chart, the drilldown also works fine for the bar chart; however, it also refreshes the area chart. Is there a way to stop the area chart from refreshing when I drill down on the bar chart?
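A panel re-runs whenever a token referenced in its search changes, so this usually means the bar chart's drilldown is setting a token (token2, presumably) that the area chart's search also uses. One fix is to have each chart's drilldown set its own dedicated token and let only the table consume it; a sketch, with drill_value as a placeholder name:

    <drilldown>
      <set token="drill_value">$click.value$</set>
    </drilldown>

and point the table's search at $drill_value$ instead of token2. The area chart then never sees a token change on bar-chart clicks.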
Please help me find a list of the pre-installed apps and TAs that come with Splunk Enterprise and Splunk ES. Thank you in advance.
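For the definitive "ships with the product" list, the release notes and installation manual for your version are the authority. To enumerate what is actually installed on a given instance (useful for comparing against a fresh install), the apps REST endpoint works as a sketch:

    | rest /services/apps/local
    | table title label version disabled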
Hello Splunk Community!

I was hoping someone could help me out here. I have been having problems adding a third field to an existing query that generates statistics for SSL certificates expiring in the next 90 days. I am able to get the fields "name" and "expirationDate" to display, but cannot add the field "subject" to the equation. The current search query is:

    index="test" sourcetype="test:test1:json" source="test.test2"
    | spath path=ssl output=fred
    | rex field=fred max_match=0 "\"name\":\"(?<name>.*?)\"}"
    | rex field=fred max_match=0 "\"expirationDate\":(?<expirationDate>.*?),"
    | dedup name
    | eval bob = mvzip(name,expirationDate)
    | mvexpand bob
    | rex field=bob "(?<name>.*),(?<expirationDate>.*)"
    | eval t=now()
    | where expirationDate >= t AND expirationDate <= (t + 7776000)
    | eval expiry=strftime(expirationDate, "%F %T.%3N")
    | table host name expiry

The expected output is:

    host    name    expiry
    abc     test1   2021-07-09 10:10:10.000

I want to add a new field "subject", for which I did the following, but whenever "expirationDate" is added to the equation I get no results:

    index="test" sourcetype="test:test1:json" source="test.test2"
    | spath path=ssl output=fred
    | rex field=fred max_match=0 "\"name\":\"(?<name>.*?)\"}"
    | rex field=fred max_match=0 "\"expirationDate\":(?<expirationDate>.*?),"
    | rex field=fred max_match=0 "\"subject\":(?<subject>.*?),"
    | dedup subject name
    | eval bob = mvzip(name,expirationDate,subject)
    | mvexpand bob
    | rex field=bob "(?<name>.*),(?<expirationDate>.*),(?<subject>.*)"
    | eval t=now()
    | where expirationDate >= t AND expirationDate <= (t + 7776000)
    | eval expiry=strftime(expirationDate, "%F %T.%3N")
    | table host name expiry subject

Grateful if you could point out my mistake here! I believe it's a wrong expression but I cannot figure it out (it works fine without the conversion of the expiration date, but then everything comes out in one row, which is also not ideal, as I am hoping to separate entries into different rows).

Thanks,
MJA
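A likely culprit: mvzip() only zips two multivalue fields, and its optional third argument is a delimiter string, not a third field. So mvzip(name,expirationDate,subject) joins name and expirationDate using the contents of subject as the separator, after which the three-part rex no longer matches, expirationDate is missing, and the where clause filters everything out. A sketch of the usual workaround, nesting two mvzip calls:

    | eval bob = mvzip(mvzip(name, expirationDate), subject)
    | mvexpand bob
    | rex field=bob "(?<name>[^,]+),(?<expirationDate>[^,]+),(?<subject>.*)"

The [^,]+ captures also stop the first two groups from greedily swallowing commas, which (?<name>.*),(?<expirationDate>.*) can otherwise do.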
Hi all. I would like to know if there is a method to avoid displaying these tiresome messages inside the search head environment, coming from the indexer environment:

    [INDEXER] Dispatch Runner: Configuration initialization for /opt/splunk/var/run/searchpeers/81F8C272-511D-4FBA-A2EA-F87129051744-1625835689 took longer than expected (3260ms) when dispatching a search (search ID: remote_SH_xxx__xxx__xxx__xxx_1625837471.443922_07316CBA-2692-4C41-B9C4-89500E9141D2); this typically reflects underlying storage performance issues

The system is up and running under heavy load, I know, and the disks are heavily stressed. The message is really annoying; I know it's not a blocking warning (the storage is under stress), but there are many dashboards with panels where these warnings completely break the visual layout. I tried to search around, but there seems to be no solution. Any idea? I CANNOT upgrade the storage and disks! Thanks.
Hi, I'm hoping someone can help me out here. I have a property (books) on each event which holds an array of objects. I would like to group by books{}.name, with count on the y axis, and create a bar chart. I tried using top books{}.name, but this does not seem to give correct results; it seems to miss some groups altogether.

    {
      books: [
        {name: "book1"},
        {name: "book2"},
        {name: "book3"},
        {name: "book3"},
        {name: "book1"},
        {name: "book1"}
      ]
    }

Would you have an idea of how to fix this?
Kind regards,
Maurice
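A sketch that makes the grouping explicit, assuming the raw events are JSON like the sample:

    ... base search ...
    | spath path=books{}.name output=book
    | mvexpand book
    | stats count by book
    | sort - count

mvexpand turns the multivalue field into one row per array element, so every occurrence of a name contributes to its group, even when several occur inside a single event.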
Hi everyone, I have an issue that I don't understand. As mentioned in the subject, I'm building a dashboard with dropdown inputs where I'm using the id and base attributes, but my searches don't return any results. Note that the last index time of my data is 06 July 2021. Here's the XML code of my dashboard. Thanks in advance for your kind help!

    <form theme="dark">
      <label>Advanced Dashboard</label>
      <description>This is a dashboard built for learning purposes.</description>
      <search id="base_search">
        <query>index=splunk_tutorial sourcetype="access_combined_wcookie" status=200 earliest=1</query>
      </search>
      <fieldset submitButton="false">
        <input type="dropdown" token="date_year" searchWhenChanged="true">
          <label>field1</label>
          <choice value="*">All</choice>
          <default>*</default>
          <fieldForLabel>date_year</fieldForLabel>
          <fieldForValue>date_year</fieldForValue>
          <search base="base_search">
            <query>| stats count by date_year</query>
          </search>
        </input>
        <input type="dropdown" token="date_month" searchWhenChanged="true">
          <label>field1</label>
          <choice value="*">All</choice>
          <default>*</default>
          <fieldForLabel>date_month</fieldForLabel>
          <fieldForValue>date_month</fieldForValue>
          <search base="base_search">
            <query>| stats count by date_month</query>
          </search>
        </input>
        <input type="dropdown" token="date_mday" searchWhenChanged="true">
          <label>Days of Month</label>
          <choice value="*">All</choice>
          <default>*</default>
          <fieldForLabel>date_mday</fieldForLabel>
          <fieldForValue>date_mday</fieldForValue>
          <search base="base_search">
            <query>| stats count by date_mday</query>
          </search>
        </input>
        <input type="dropdown" token="date_wday" searchWhenChanged="true">
          <label>Week days</label>
          <choice value="*">All</choice>
          <default>*</default>
          <fieldForLabel>date_wday</fieldForLabel>
          <fieldForValue>date_wday</fieldForValue>
          <search base="base_search">
            <query>| stats count by date_wday</query>
          </search>
        </input>
        <input type="dropdown" token="date_hour" searchWhenChanged="true">
          <label>Hours</label>
          <choice value="*">All</choice>
          <default>*</default>
          <fieldForLabel>date_hour</fieldForLabel>
          <fieldForValue>date_hour</fieldForValue>
          <search base="base_search">
            <query>| stats count by date_hour</query>
          </search>
        </input>
        <input type="dropdown" token="country" searchWhenChanged="true">
          <label>Countries</label>
          <choice value="*">All</choice>
          <default>*</default>
          <fieldForLabel>Country</fieldForLabel>
          <fieldForValue>Country</fieldForValue>
          <search base="base_search">
            <query>| iplocation clientip | stats count by Country</query>
          </search>
        </input>
      </fieldset>
    </form>
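One thing worth checking: a post-process search (<search base="...">) only sees fields that the base search explicitly carries forward. When the base search is a plain event search, the usual advice is to end it with a transforming command or an explicit | fields list naming every field the child searches use; otherwise the children often come back empty. A sketch of the base search with that change (the field list simply matches the dropdowns above):

    <search id="base_search">
      <query>index=splunk_tutorial sourcetype="access_combined_wcookie" status=200 earliest=1
    | fields date_year date_month date_mday date_wday date_hour clientip</query>
    </search>

Also note the form has no time input, so the earliest=1 in the query is what keeps the July 2021 data in scope; if that is ever removed, the default time range may exclude it.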
I have a Splunk cluster where the instances have the following configuration:

--> 16 vCPU
--> 64 GB memory
--> 400 GB disk

From the source my app pulls data from, 150k records are generated each day. How do we work out which license needs to be installed for this scenario? Is there a straightforward formula to calculate that? TIA.
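Splunk licensing is sized on the volume of data indexed per day (GB/day), not on record counts or hardware, so the back-of-the-envelope formula is: daily volume = records per day x average event size. For example, 150,000 records at roughly 1 KB each come to only about 0.15 GB/day. If data is already flowing, the actual figure can be measured from the license usage log (this sketch assumes you can search _internal on the license master):

    index=_internal source=*license_usage.log* type=Usage
    | eval GB = b/1024/1024/1024
    | timechart span=1d sum(GB) as GB_per_day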
When I click on my Data Summary, it is not displaying anything, just showing: [screenshot omitted]. Any suggestions? Thanks.
I have a report that counts the events for the "next weekend", meaning the following:

1. "Weekend" is considered the interval between Friday 4 pm and Monday 8 am.
2. "Next" is to be considered with this rule: from Monday to Thursday it is the weekend of the coming week, and from Friday to Sunday it is the weekend after the next one. Example: before Friday June 4th, the interval 4/6 - 7/6 is considered the next weekend; on Friday the 4th, the interval 4/6 - 7/6 becomes this weekend and 11/6 - 14/6 becomes next weekend.

Issue: the report is scheduled at 8 AM each day, but the Friday run does not consider the upcoming weekend as the "next weekend"; instead it takes the weekend seven days out. I need help understanding how to change the logic of this report so that the Friday run is also filled with the immediately upcoming weekend's results, meaning the first and second day right after. Hope it makes sense.

    | eval start = strptime('Scheduled Start', "%Y-%m-%d %H:%M")
    | eval end = strptime('Scheduled End', "%Y-%m-%d %H:%M")
    | eval "Scheduled Start" = strftime(start, "%Y-%m-%d %H:%M")
    | eval "Scheduled End" = strftime(end, "%Y-%m-%d %H:%M")
    | eval nextFriday = if(strftime(now(),"%w")=="5" OR strftime(now(),"%w")=="6" OR strftime(now(),"%w")=="0", relative_time(now(), "+1w@w5+16h"), relative_time(now(), "@w5+7d+16h"))
    | eval nextMonday = relative_time(nextFriday, "+3d@d+8h")
    | eval nextMondayS = strftime(nextMonday, "%Y-%m-%d %H:%M")
    | eval nextFridayS1 = strftime(nextFriday, "%Y-%m-%d %H:%M")
    | where start >= nextFriday AND start <= nextMonday
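If the Friday 8 AM run (which fires before the 4 pm cutoff) should still target the imminent weekend, one option is to treat Friday separately from Saturday and Sunday: on a Friday, @w5 snaps to that same day at midnight, so @w5+16h is that Friday at 4 pm. A sketch of the changed eval, leaving the rest of the search as it is:

    | eval wday = strftime(now(), "%w")
    | eval nextFriday = case(
        wday=="5", relative_time(now(), "@w5+16h"),
        wday=="6" OR wday=="0", relative_time(now(), "+1w@w5+16h"),
        true(), relative_time(now(), "@w5+7d+16h"))

If instead the flip to the following weekend should only happen after 4 pm on Friday, compare now() against relative_time(now(), "@w5+16h") when choosing the branch.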
Hi guys, I'm a noob with regular expressions!!

    2021-07-05 23:22:12.807 +01:00 [WRN] XXXXX.Membership.Renew Long Running Request: IntegratePaymentCommand (1082 milliseconds) Jobs {"BatchSize":10,"MaxRetry":5,"$type":"IntegratePaymentCommand"}

What if I want to extract [WRN] as event_level (it can be [WRN] or [ERR]), and the (xxxxx milliseconds) value as a time?
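A sketch with two rex extractions (the field names are just suggestions):

    | rex "\[(?<event_level>WRN|ERR)\]"
    | rex "\((?<duration_ms>\d+) milliseconds\)"

The first captures WRN or ERR between literal square brackets into event_level; the second grabs the number in front of "milliseconds" inside the parentheses into duration_ms.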
Hello all, hope you are all doing good!! I am trying to send some data to Splunk using a UF. Below are my settings, but the data reaches Splunk without the line breaking I specified in my stanza. I want to break my events wherever there is a <messages> tag. Kindly help me; I am just getting started on my journey as an admin but am hitting all sorts of issues. If possible, please also share some pointers we can use to troubleshoot issues like this.

My settings:

inputs.conf:

    [monitor:///usr/narmada/props_test.log]
    index = narmada
    sourcetype = logs_format

outputs.conf:

    [tcpout:abc]
    server = 65.2.122.16:9997

props.conf:

    [logs_format]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]*)<messages>
    BREAK_ONLY_BEFORE = <messages>

Raw data:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?><logs schemaVersion="0"><messages><timestamp>2021-04-22T11:55:13.766-07:00</timestamp><level>PROGRESS</level><thread>backup4 ee5fa1cb0c31a3e56f4fed2c99ff7745</thread><location>com.netapp.common.flow.tasks.Log</location><msgKeyClass>com.netapp.smvi.SMMsgKey</msgKeyClass><msgKeyValue>PROGRESS_TASK_BACKUP_STARTING</msgKeyValue><parameters xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:nil="true"/><message>Starting backup request</message></messages><messages><timestamp>2021-04-22T11:55:14.156-07:00</timestamp><level>INFO</level><thread>backup4 aaaaaaaaaajksbcjkbud7yh8y83eh38</thread><location>com.netapp.smvi.task.validation.BackupValidation</location><msgKeyClass>com.netapp.smvi.SMMsgKey</msgKeyClass><msgKeyValue>BACKUP_VALIDATION_INTERNAL_BACKUP_NAME_FOR_SCHEDULE_JOB</msgKeyValue><parameters><parameter>66fc1387-594c-48cb-b35d-94ca319a4a3c</parameter><parameter>backup_PM cDOT Datastore_20210422115514</parameter></parameters><message>Generating backupName for the scheduleJob 66fc1387-594c-48cb-b35d-94ca319a4a3c is backup_PM cDOT Datastore_20210422115514</message></messages>
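Two things worth checking here. First, LINE_BREAKER and the other parsing settings take effect on the first full Splunk instance in the pipeline (an indexer or heavy forwarder), not on a universal forwarder, so the [logs_format] props.conf stanza needs to live on the indexer at 65.2.122.16. Second, BREAK_ONLY_BEFORE belongs to the line-merging stage and is ignored once SHOULD_LINEMERGE=false, and for single-line XML like this the usual recommendation is to break on the tag boundary itself rather than on newlines. A sketch, with the timestamp settings assumed from the sample data:

    [logs_format]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = </messages>(\s*)<messages>
    TIME_PREFIX = <timestamp>
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z

The capture group is what gets discarded, so each event ends with </messages> and the next one starts at <messages>.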
    index=**** source_type=** cf_app_name=** api_call="*"
    | where like(api_call, "%xyz%")
    | table _time, response_code, duration, api_call
    | bin _time span=1d
    | appendpipe [| chart count over api_call by response_code]
    | stats sum(*) as *, count as Number_Of_Calls, perc95(duration) as perc95_duration, avg(duration) as avg_duration by api_call
    | eval perc95_duration=round(perc95_duration,3), avg_duration=round(avg_duration,3)
    | sort - _time
    | fields - duration, response_code
    | table api_call, _time, *, Number_Of_Calls

My _time column is always blank. Either _time or the response-code columns are filled in, never both.
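That is expected with this pipeline: stats ... by api_call keeps only the aggregates and the by-field, so _time is discarded, and the rows injected by appendpipe's chart never had a _time to begin with, which is why the two sets of columns can never be populated at once. A sketch that keeps the day in the result by grouping on it, assuming one row per day per API is the goal (without the per-response-code pivot):

    index=**** source_type=** cf_app_name=** api_call="*"
    | where like(api_call, "%xyz%")
    | bin _time span=1d
    | stats count as Number_Of_Calls, perc95(duration) as perc95_duration, avg(duration) as avg_duration, values(response_code) as response_codes by _time, api_call
    | eval perc95_duration=round(perc95_duration,3), avg_duration=round(avg_duration,3)
    | sort - _time

If one column per response code is also needed, that is a separate pivot (chart/xyseries) and is usually easier as its own panel.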
In our current Splunk infrastructure we have a number of UFs pushing data to a layer of intermediate forwarders, which parse or filter the data and push it on to a layer of indexers. While troubleshooting an issue with data that is already indexed, is there any way to find out which intermediate forwarder parsed a given event? We can find the UF/data source from the host field, and the splunk_server field tells us the indexer where the data is stored and served from, but nothing identifies the intermediate tier.
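There is no such field out of the box, but each intermediate (heavy) forwarder can stamp events with an indexed field at parse time. A sketch, where hf01 and the field name intermediate_fwd are placeholders; the FORMAT value must be made unique per forwarder:

    # props.conf on each intermediate forwarder
    [default]
    TRANSFORMS-tag-hf = add_hf_name

    # transforms.conf on each intermediate forwarder
    [add_hf_name]
    REGEX = .
    FORMAT = intermediate_fwd::hf01
    WRITE_META = true

    # fields.conf on the search heads
    [intermediate_fwd]
    INDEXED = true

New events can then be filtered with intermediate_fwd=hf01; note that already-indexed data will not gain the field retroactively.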
The o365:management:activity stanza within DA-ITSI-CP-m365/default/props.conf contains the following line: REPORT-nameval=NameValue   But there is no stanza called NameValue in DA-ITSI-CP-m365/default/transforms.conf and, as a result, I'm seeing errors that look like the following in index=_internal: SearchOperator:kv [12345 TcpChannelThread] - Invalid key-value parser, ignoring it, transform_name='NameValue'. Is this an error with the add-on or am I doing something wrong?  A missing app/add-on perhaps?
Hi, coming for help again. I am trying to track SMB traffic in my network, specifically SMBv1 and v1.2, since they are both vulnerable. I have tried a few things in Splunk but can't seem to capture specific versions of SMB. Any help is great.
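A lot depends on what data you can collect. If your Windows file servers are the targets, one commonly used source is the built-in SMB1 access audit: enable it with Set-SmbServerConfiguration -AuditSmb1Access $true and collect the Microsoft-Windows-SMBServer/Audit event channel, where SMB1 connections appear as Event ID 3000 with the client address in the event body. A sketch against that data (the index and source names are assumptions; adjust them to your inputs):

    index=wineventlog source="*Microsoft-Windows-SMBServer/Audit*" EventCode=3000
    | stats count by host

Wire data (for example Zeek) is the other common route, since the negotiated SMB dialect shows up in its SMB logs.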
I'm trying to write a search to extract a couple of fields using rex. The text string to search is:

    SG:G006 Consumer:CG-900004_T01 Topic:ingressTopic Session: bc77465b-55fb-46bf-8ca1-571d1ce6d5c5  LatestOffset:1916164 EarliestOffset:0 CurrentOffset:1916163 MessagesToConsume:2

I'm trying the following, but nothing gets returned:

    index=...
    | rex "MessagesToConsume:(?P<MessagesToConsume>\d+) CurrentOffset:(?P<CurrentOffset>\d+)"
    | where MessagesToConsume>1
    | table CurrentOffset MessagesToConsume

CurrentOffset and MessagesToConsume are always empty. What am I doing wrong? Thanks!
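The pattern expects MessagesToConsume before CurrentOffset, but in the raw text CurrentOffset comes first, so the regex never matches and both captures stay empty. A sketch that matches the order in the sample (the \s+ also tolerates variable spacing):

    | rex "CurrentOffset:(?P<CurrentOffset>\d+)\s+MessagesToConsume:(?P<MessagesToConsume>\d+)"
    | where MessagesToConsume > 1
    | table CurrentOffset MessagesToConsume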