Field names after timechart with a group-by are not commSent, but the values of the group-by field, i.e., the values of ID. (You can examine the Statistics tab to confirm this.) You need to enumerate these values. Say you have five values ID1, ID2, ID3, ID4, ID5; you do

index=indexhc source=hcdriver sourcetype="assembly" appname="marketing" ID IN (abc,xyz,qtr,jyk,klo,mno,ghr)
| timechart span=1d count as commSent by ID
| predict ID1 as predicted_ID1 ID2 as predicted_ID2 ID3 as predicted_ID3 ID4 as predicted_ID4 ID5 as predicted_ID5 algorithm=LLP holdback=0 future_timespan=24

(Then you will need to figure out what to do with these 10 additional series.) Hope this helps.
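Since the original goal was an anomaly score per ID, the "enumerate each series, forecast it, score the newest point" idea can be sketched outside Splunk like this (Python; the mean of the history is only a naive stand-in for the predict command, and the series names and values are made up):

```python
# Per-series anomaly scoring: enumerate each series (one per ID column)
# and score the newest point against a naive mean "prediction".
# The mean is a stand-in for Splunk's predict, not a reimplementation of it.
series = {
    "ID1": [10, 12, 11, 50],   # last value is an obvious outlier
    "ID2": [5, 6, 5, 6],
}

scores = {}
for name, values in series.items():
    history, latest = values[:-1], values[-1]
    predicted = sum(history) / len(history)   # naive forecast from history
    scores[name] = abs(latest - predicted)    # anomaly score = deviation

print(scores["ID1"])   # 39.0
```

Each series gets its own score, which is what lets you alert on all IDs from a single search instead of one report per ID.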
So you are saying that sometimes you get the email and occasionally you do not. Can you see examples of sendemail in the internal logs for a successful email alert? Do you have access to the _internal index?
Validate it with the SPL query below:

index=_internal
| head 1
| sendemail to="name@my.email.domain" format="html" server=smtp.gmail.com:587 use_tls=1
Have you configured SMTP on the search head? Settings -> Server settings -> Email settings
You can do that by clicking the Assets and Identity lookups and following the hyperlink under the Source tab. That will redirect you to the contents of the lookup, where you can click on the field and edit it.
When I checked with index=_internal sendemail, I don't see any logs. The email address we use for the alert is fine, because the alert triggers every day and we receive the email. This issue started happening suddenly: about once a week we do not receive the email.
If your alert has fired and has sent the email but it was not received, then look for any events in _internal:

index=_internal sendemail

Is your Splunk server able to talk to the SMTP host it is trying to send email to? Have you configured that server?
Thanks!
We have set up one alert which should trigger every hour. When we run the alert query it shows results, but we did not receive the email. There is no difference between index time and event time. In the scheduler logs the status shows as success, but I don't see any python logs and the alert did not fire. What could be the reason for not receiving the email from the alert?
I am assuming that you want to get 200, 400 and 500 (not a second 400) response codes. You can combine the response code and method and then chart by that combined field, e.g. see this run-anywhere example, but it is the last two lines you want.

| makeresults count=40
| eval responseCode=mvindex(split("200,400,500", ","), random() % 3)
| eval method=mvindex(split("GET,POST,PATCH", ","), random() % 3)
| eval app="APP".(random() % 5)
``` Use these two lines to get the chart you want ```
| eval s=responseCode."_".method
| chart count over app by s

It will not give you a multi-line header as in your image, but that's not really how Splunk does things in tables.
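For anyone wanting to see the combine-then-pivot idea outside SPL, here is a minimal Python sketch (the field names mirror the SPL above; the event data is entirely made up):

```python
from collections import Counter

# Toy events mirroring the SPL example's fields (values are made up)
events = [
    {"app": "APP1", "responseCode": "200", "method": "GET"},
    {"app": "APP1", "responseCode": "200", "method": "GET"},
    {"app": "APP1", "responseCode": "400", "method": "POST"},
    {"app": "APP2", "responseCode": "500", "method": "PATCH"},
]

# Combine the two fields into one series key, then count per (app, series):
# the equivalent of `eval s=responseCode."_".method | chart count over app by s`
counts = Counter(
    (e["app"], e["responseCode"] + "_" + e["method"]) for e in events
)

print(counts[("APP1", "200_GET")])   # 2
```

The composite key is what turns two chart dimensions into one, which is exactly why the single-row header comes out as 200_GET, 400_POST, etc.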
It looks like you posted the same image twice, but I am assuming that the INFO message was the first one at 11:15:54.355 and the error was 1 millisecond earlier at 11:15:54.354, and that you want to extract the ID 0021d100-46c2-11ee-9327-12b7e80d647b and then count those IDs which have only INFO and those that have both INFO and ERROR. Or it might be that you just want to count errors vs info, so you could do

| eval isInfo=if(severity="INFO", 1, 0)
| eval isError=if(severity="ERROR", 1, 0)
| stats sum(isInfo) as Transactions sum(isError) as Errors

which would just count the INFO and ERROR events, or you could do this

| rex field=message "(?<tx_id>\w{8}-\w{4}-\w{4}-\w{4}-\w{12})"
| stats count by tx_id
| where count=2

which would give you all the transactions that ended in error, but it depends exactly what your output requirement is, and also whether you can have more than one INFO/ERROR event per transaction in the dataset.
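The extract-and-count logic of the second SPL fragment can be sketched in Python for clarity (same UUID-shaped pattern as the rex; the log lines are invented):

```python
import re
from collections import Counter

# UUID-shaped transaction ID, same pattern as the rex command above
TX_ID = re.compile(r"\w{8}-\w{4}-\w{4}-\w{4}-\w{12}")

# Toy log lines (messages and the second ID are made up)
logs = [
    ("INFO",  "Received 0021d100-46c2-11ee-9327-12b7e80d647b"),
    ("ERROR", "Failed   0021d100-46c2-11ee-9327-12b7e80d647b"),
    ("INFO",  "Received 11111111-2222-3333-4444-555555555555"),
]

# Count how many events mention each transaction ID
counts = Counter()
for severity, message in logs:
    m = TX_ID.search(message)
    if m:
        counts[m.group(0)] += 1

# IDs seen twice had both an INFO and an ERROR event (where count=2)
failed = [tx for tx, n in counts.items() if n == 2]
print(failed)   # ['0021d100-46c2-11ee-9327-12b7e80d647b']
```

As with the SPL version, this assumes exactly one INFO plus at most one ERROR per transaction; duplicate INFO events would need a different grouping.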
Your subsearch is in the wrong place - it should be a constraint on the outer search, whereas now it is attached to your lookup statement on your second line, hence the error. There are a couple of ways to solve this.

1. Make the lookup an automatic lookup. That means the outer search will already have the autonomous_system value from the event's src_ip. In that case you can do the search like this:

index=firewall src_ip=*
    [ | makeresults
      | eval src_ip="1.1.1.1"
      | lookup asn ip as src_ip
      | fields autonomous_system ]
| stats values(src_ip) by autonomous_system

There is no point in searching the index in the subsearch just to construct a lookup for an IP address; just use makeresults to perform the lookup.

2. If you do not already have the autonomous_system in your data, you can't use a subsearch to constrain it, so you will have to do the lookup twice: the first time to get the autonomous system for the event and the second to get the autonomous system of the wanted match IP (1.1.1.1). The search is

index=firewall src_ip=*
| lookup asn ip as src_ip
| eval match_src_ip="1.1.1.1"
| lookup asn ip as match_src_ip OUTPUT autonomous_system as wanted_autonomous_system
| where autonomous_system=wanted_autonomous_system
| stats values(src_ip) by autonomous_system

Hope this helps
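To make the "look up twice, then filter" pattern of option 2 concrete, here is a small Python sketch with a dictionary standing in for the asn lookup table (all IPs and AS names are invented):

```python
# Toy ASN lookup table and firewall events (all values made up) -
# a sketch of the "look up twice, then filter" pattern from option 2.
asn_lookup = {
    "1.1.1.1": "AS-ALPHA",
    "2.2.2.2": "AS-ALPHA",
    "3.3.3.3": "AS-BETA",
}

events = [{"src_ip": ip} for ip in ["1.1.1.1", "2.2.2.2", "3.3.3.3"]]

# First lookup: autonomous system of the wanted match IP (1.1.1.1)
wanted_as = asn_lookup.get("1.1.1.1")

# Second lookup + filter: keep events whose src_ip maps to the same AS
matching_ips = sorted(
    e["src_ip"] for e in events
    if asn_lookup.get(e["src_ip"]) == wanted_as
)
print(matching_ips)   # ['1.1.1.1', '2.2.2.2']
```

The point is that the filter value comes from a lookup of its own, which is why the SPL needs two lookup commands rather than a subsearch attached to one.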
You can see from my earlier post that this appears to be an issue that is still unresolved, so you will need to address it another way, as referenced.
Ah... you mention a base search. I have seen an issue with post-process searches showing "Waiting for data" when a search re-runs but only the post-process part has changed criteria and the base does not - there is some additional token complexity in there too. Is this relevant to your case? The behaviour is similar in that opening the search in a new window works. I currently have a workaround where I ensure the post-process search forces use of a field that is not actually required, but it makes the search run. I haven't tracked this down, but I suspect it's a bug, as I am generally admin when I see this.
Dear Support, I have 2 indexes (indexA, indexB) and one receiving server with 2 different ports (10.10.10.10:xx, 10.10.10.10:yy). I need my indexer to forward indexA to 10.10.10.10:xx and indexB to 10.10.10.10:yy. What is the best way to achieve this? I made two different apps with outputs, props and transforms and it does not work. I tried one app with LB and it does not work either. Example of outputs.conf:

[tcpout]
defaultGroup = group1, group2

[tcpout:group1]
server = 10.10.10.10:xx
forwardedindex. = ???

[tcpout:group2]
server = 10.10.10.10:yy
forwardedindex. = ???

Is this a good way to do it? How should the forwardedindex config look? What about props and transforms? I would appreciate any help. Thanks, pawel
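For reference, Splunk's documented pattern for per-index routing sets _TCP_ROUTING via transforms at parse time (so it must run on the indexer or a heavy forwarder, not a universal forwarder). The sketch below is untested against this setup, and the stanza names route_indexA/route_indexB are placeholders:

```ini
# outputs.conf - one target group per destination port
[tcpout:group1]
server = 10.10.10.10:xx

[tcpout:group2]
server = 10.10.10.10:yy

# props.conf - apply the routing transforms to incoming data
[default]
TRANSFORMS-index_routing = route_indexA, route_indexB

# transforms.conf - route on the index metadata key
[route_indexA]
SOURCE_KEY = _MetaData:Index
REGEX = ^indexA$
DEST_KEY = _TCP_ROUTING
FORMAT = group1

[route_indexB]
SOURCE_KEY = _MetaData:Index
REGEX = ^indexB$
DEST_KEY = _TCP_ROUTING
FORMAT = group2
```

With routing set per event, a defaultGroup listing both groups (which clones all data to both ports) should not be needed.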
Hello, I want to find, in a subsearch, the autonomous_system for the IP address I provided (in this example, 1.1.1.1). Next, based on the name of the autonomous_system returned from the subsearch, I want to find all IP addresses connecting to my network that belong to that autonomous_system. For now I have something like this:

index=firewall src_ip=*
| lookup asn ip as src_ip
    [search index=firewall src_ip=1.1.1.1
     | fields src_ip
     | lookup asn ip as src_ip
     | rename autonomous_system AS subsearch_autonomous_system
     | dedup subsearch_autonomous_system]
| stats values(src_ip) by subsearch_autonomous_system

But when I run this search I get the error: Error in 'lookup' command: Cannot find the source field '(' in the lookup table 'asn'. Can anyone help me with that? Regards, Daniel
I have a use case to use the ML feature to detect anomalies in comms sent from each ID. I was trying to get this from the predict function, but there are multiple IDs and I can't set up an alert/report individually for every ID. How can I do this? Please help. The query I am trying:

index=indexhc source=hcdriver sourcetype="assembly" appname="marketing" ID IN (abc,xyz,qtr,jyk,klo,mno,ghr)
| timechart span=1d count as commSent by ID
| predict commSent as predicted_commSent algorithm=LLP holdback=0 future_timespan=24
| eval anamoly_score=if(isnull(predicted_commSent),0,abs(commSent - predicted_commSent))
| table _time,ID,commSent,predicted_commSent,anamoly_score

The above query is not giving any output; it seems the predict command does not work with multiple columns. Please suggest.
The sample INFO event does not contain a "Received Payload" text. What field(s) link the ERROR event to an INFO event?
Hi. I've tried to get Splunk to understand syslog messages coming from a Cisco Mobility Express setup. Mobility Express (ME) is the controller solution built into, in this setup, 3 AP3802I access points running 8.10.171.0. I have been successful at getting and displaying data from a C2960L-8PS switch running IOS 15, but not from any access point (AP). I've set up syslogging from the ME directly to a single-instance Splunk demo lab running on Ubuntu with rsyslog. I can see data being logged into /data/syslog/192.168.40.20/:

-rw-r--r-- 1 syslog syslog 9690 Sep 4 15:54 20230904-15.log
-rw-r--r-- 1 syslog syslog 41100 Sep 4 16:58 20230904-16.log
-rw-r--r-- 1 syslog syslog 9192 Sep 4 17:53 20230904-17.log

Examples of the syslog messages are:

2023-08-29T05:48:04.090627+00:00 <133>SampleSite: *emWeb: Aug 29 07:48:03.431: %AAA-5-AAA_AUTH_ADMIN_USER: aaa.c:3334 Authentication succeeded for admin user 'example' on 100.40.168.192
2023-09-04T17:01:52.684140+02:00 <44>SampleSite: *apfMsConnTask_0: Sep 04 17:01:52.495: %APF-4-PROC_ACTION_FAILED: apf_80211k.c:825 Could not process 802.11 Action. Received RM 11K Action frame through incorrect AP from mobile station. Mobile:1A:4A:FA:F9:BA:C6.
2023-09-04T17:01:52.718781+02:00 <44>SampleSite: *Dot1x_NW_MsgTask_0: Sep 04 17:01:52.530: %LOG-4-Q_IND: apf_80211k.c:825 Could not process 802.11 Action. Received RM 11K Action frame through incorrect AP from mobile station. Mobile:1A:4A:FA:F9:BA:C6.

I've installed TA-cisco_ios from Splunkbase. At the top of my etc/apps/search/local/inputs.conf I've added:

[monitor:///data/syslog/udp/192.168.40.20]
disabled = false
host = ciscome.example.net
sourcetype = cisco:wlc
#sourcetype = cisco:ap
index = default

For switches cisco:ios works fine, but I cannot get cisco:wlc or cisco:ap to process data, it seems. Has anyone used Cisco Mobility Express with Splunk and gotten anything useful out of the logs? Am I doing it right? Thanks for any tips.
The Monitoring Console can tell you that.  It's under Indexing->License Usage->Historic License Usage.  Select "Index" from the "Split by" dropdown.