All Topics

I'm working with ForeScout Audit Policy events. Some of them have this in the message: Part (1/n), Part (2/n), and so on. I'm using the transaction command below to join the parts.

index=network sourcetype="forescout:audit" partOf=*
| transaction fields=partOf maxspan=1s
| search eventtype=fs_policy_change
| append [search index=network sourcetype=forescout:audit NOT partOf=* eventtype=fs_policy_change]
| sort - _time

The field partOf is set in default/transforms.conf:

[fs_get_parts]
REGEX = \|\sPart\s\((?<numPart>\d{1,3})\/(?<partOf>\d{1,3})\)\s\|

The append adds the single-event policy changes. The issue is that the order of the parts is sometimes correct and other times not. For example, I will get Part (4/4), Part (2/4), Part (1/4), and Part (3/4) for some of the transactions, and others in the correct order. I didn't see anything in the transaction command that would let me sort by partOf. Any ideas?

Splunk Enterprise 7.2.5.1

TIA, Joe
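One possible workaround (a sketch, untested against this data): since the parts share the same second, their order within the transaction is effectively ingest order, so sort the events by the extracted numPart before transaction assembles them. Note that transaction normally expects events in descending time order, so verify the grouping still behaves correctly with the re-sorted stream; numPart comes from the transforms.conf extraction quoted above.

```spl
index=network sourcetype="forescout:audit" partOf=*
| sort 0 _time numPart
| transaction fields=partOf maxspan=1s
| search eventtype=fs_policy_change
| append [search index=network sourcetype=forescout:audit NOT partOf=* eventtype=fs_policy_change]
| sort - _time
```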
I have a situation where, in the span of 10 minutes, there is a possibility that we didn't get any data from one of the sourcetypes for one interval but started getting data in the next interval; this way I am losing data in the summary index. Any suggestion would be helpful. Here's a part of my query:

| metadata type=sources index=abc
| search source=random
| eval earliest=lastTime - 300
| eval latest=now()
| fields earliest latest

So this random source is collecting data from all the sourcetypes.
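One way to avoid losing the gap interval (a sketch; the lookup name summary_checkpoint is an assumption) is to anchor each run's earliest to a checkpoint of where the previous run actually stopped, rather than to lastTime - 300:

```spl
| metadata type=sources index=abc
| search source=random
| append [| inputlookup summary_checkpoint | fields last_run]
| stats max(lastTime) as lastTime, max(last_run) as last_run
| eval earliest=coalesce(last_run, lastTime - 300), latest=now()
| fields earliest latest
```

After the summary search completes, the scheduled search would write its latest value back with outputlookup summary_checkpoint so the next run resumes exactly where this one stopped, even if a sourcetype was silent for an interval.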
Hello, I'm trying to reroute certain events as they hit my indexer from a particular source. In inputs.conf on the UF, the index is set to index=tokens for my source path, but I want to catch certain events from this source and route them to a different index at the indexer. So far three events have gotten past my transform, and I'm trying to figure out why and what I'm doing wrong. Below are my original props and transforms:

props.conf
[source::...redacted]
TRANSFORMs-mbox_token_reroute = reroute

transforms.conf
[reroute]
REGEX=reg
FORMAT=mbox_tokens
SOURCE_KEY=MetaData:Source
DEST_KEY=_MetaData:Index

This is what I just changed it to; I am waiting to see if events are rerouted once the trigger action happens:

props.conf
[source::...redacted]
TRANSFORMs-mbox_token_reroute = reroute

transform.conf
[reroute]
REGEX=reg
FORMAT=mbox_tokens
SOURCE_KEY=MetaData:Source
DEST_KEY=_MetaData:Index
WRITE_META=true

What should I do to make sure that the events are getting rerouted?
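One thing worth checking: with SOURCE_KEY=MetaData:Source the REGEX is matched against the source value, not against the event text, so a pattern meant to catch certain events would never fire. A content-based reroute sketch (keeping the placeholder regex and index name from the post; not a confirmed fix) would match on _raw instead:

```spl
# transforms.conf -- match on event content rather than the source value
[reroute]
SOURCE_KEY = _raw
REGEX = reg
DEST_KEY = _MetaData:Index
FORMAT = mbox_tokens
```

WRITE_META is used for writing indexed fields to _meta and should not be needed for a DEST_KEY=_MetaData:Index rewrite.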
After upgrading from version 7.0.1 to 8.0.2, the errors below appear. Splunk is not indexing some internal logs like license_usage.log, and license consumption has increased a lot, but I think it is Splunk's own logging.

BatchReader-0 Root Cause(s): The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.
Last 50 related messages:
03-05-2020 09:32:47.238 -0300 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
03-05-2020 09:32:45.582 -0300 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
03-05-2020 09:32:37.979 -0300 INFO TailReader - State transitioning from 1 to 0 (initOrResume).
03-05-2020 09:32:37.979 -0300 INFO TailReader - State transitioning from 1 to 0 (initOrResume).
03-05-2020 09:32:37.971 -0300 INFO TailReader - tailreader0 waiting to be un-paused
03-05-2020 09:32:37.971 -0300 INFO TailReader - Starting tailreader0 thread
03-05-2020 09:32:37.968 -0300 INFO TailReader - Registering metrics callback for: tailreader0
03-05-2020 09:32:37.969 -0300 INFO TailReader - batchreader0 waiting to be un-paused
03-05-2020 09:32:37.969 -0300 INFO TailReader - Starting batchreader0 thread
03-05-2020 09:32:37.969 -0300 INFO TailReader - Registering metrics callback for: batchreader0

TailReader-0 Root Cause(s): The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.
Last 50 related messages: (identical to the messages listed above)
I'm trying to create a custom source type which reads a TSV log file, where the third column in the file is a JSON payload wrapped in quotes. I can't figure out how to get the source type to parse out the third column as JSON in Splunk. Here's an example of a line entry below:

6680 "2020-03-06 13:50:13.254" "{"date":"3/6/2020 1:50:13 PM","received":"from FooServer (Unknown [172.20.36.5]) by smtp-dev.foo.com with ESMTP ; Fri, 6 Mar 2020 13:50:13 -0500","message-id":"id@message.com","from":"foo@thisMachine.com","recipients":"John.Smith@example.com","cc":"","subject":"Test Email"}"

Any advice would be helpful, thank you.
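One search-time approach (a sketch; the sourcetype name is a placeholder, and the rex assumes the payload is always the third tab-separated column): extract the quoted JSON with rex, then hand it to spath. Adjust the delimiter in the rex if the file actually uses spaces rather than tabs.

```spl
sourcetype=my_tsv_json
| rex field=_raw "^[^\t]+\t\"[^\"]+\"\t\"(?<json_payload>\{.+\})\"$"
| spath input=json_payload
```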
Hi all,

Right now I'm just trying to deploy a Docker container with Splunk installed from an image built from source (from Splunk's GH page: https://github.com/splunk/docker-splunk). The custom elements will come later; I'm just trying to get the default splunk-centos-7 image to work. I'm running inside a CentOS 8 VM with Docker installed. I believe this is a configuration issue, but I can't find anything online mentioning what to do.

My workflow from inside the cloned repo dir:

Make the image (this is straight from the master branch):
sudo make splunk-centos-7

This successfully builds the image. Then, I run (with the proper password):
sudo docker run -it -p 8000:8000 -e "SPLUNK_PASSWORD=<password>" -e "SPLUNK_START_ARGS=--accept-license" <image ID>

This causes entrypoint.sh to eventually run:
ansible-playbook $ANSIBLE_EXTRA_FLAGS -i inventory/environ.py site.yml

This is where my issue is - I get the error:

TASK [Provision role] ***
[WARNING]: 'splunk' is undefined

I've tried the recommendation at https://splunk.github.io/splunk-ansible/EXAMPLES.html#provision-local-standalone and running with a default.yml file, but I get an Ansible error when including the splunk_standalone role. It feels like I'm missing some configuration somewhere. The build succeeds, but trying to run the container fails. Does anyone have any suggestions?
I'd like to know any information available on the timeline to support the upcoming Orlando release of ServiceNow. Historically, Splunk has been slow to certify/fix this ServiceNow Store app with new ServiceNow versions, so we've taken a "no way" approach to it, since we refuse to be held back from upgrading by a vendor's slow approach to plugin certification. In recent months, Splunk appears to be taking this more seriously, and re-certification has been happening in fair proximity to new ServiceNow releases. We're considering testing this app now in non-prod, and need to consider timing. Is there any information on when the ServiceNow app "Splunk Integration" will be certified with the Orlando release of ServiceNow? Orlando is scheduled to be released next week, and it has been available to customers and partners via the Early Access Program since January.
Hi, I have a time format like: 2019-10-08 15:24:40.132 UTC. I used eval to strip it to: 2019-10-08 15:24:40. I need to calculate an age in days. My eval is below, but it is not working. Can someone assist, please?

| eval age=ceiling((now()-strptime(Event_Created_Time_Date,"%F %H:%M:%S"))/86400)
| eval Event_Age=case(
    age<1, "1_Less than 1 Days",
    age>=30, "6_Older than 30 Days",
    age>=20, "5_Older than 20 Days",
    age>=10, "4_Older than 10 Days",
    age>=5, "3_Older than 5 Days",
    age>=2, "2_Older than 2 Days",
    0==0, "7_No Age Data")
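A debugging sketch (field name taken from the post above): the usual failure mode here is strptime returning null because the field value does not match the format string exactly, for example if the value still carries the ".132 UTC" suffix at the point where the eval runs. Surfacing the intermediate epoch value shows immediately whether the parse succeeded:

```spl
| eval epoch=strptime(Event_Created_Time_Date, "%Y-%m-%d %H:%M:%S")
| eval age=ceiling((now()-epoch)/86400)
| table Event_Created_Time_Date epoch age
```

If epoch comes back empty, the field value and the format string disagree; if it is populated, the case() buckets should then populate as written.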
I have a payload as below, and I need the StartTime and EndTime values of the first entry where IsAvailable is equal to true.

"StatusList": [
  {
    "date": "2020-03-13T00:00:00Z",
    "status": [
      { "StartTime": "2020-03-13T06:30:00Z", "EndTime": "2020-03-13T08:30:00Z", "IsAvailable": false, "score": 91.05 },
      { "StartTime": "2020-03-13T08:30:00Z", "EndTime": "2020-03-13T10:30:00Z", "IsAvailable": false, "score": 94.29 },
      { "StartTime": "2020-03-13T10:30:00Z", "EndTime": "2020-03-13T12:30:00Z", "IsAvailable": true, "score": 100 },
      { "StartTime": "2020-03-13T12:30:00Z", "EndTime": "2020-03-13T14:30:00Z", "IsAvailable": true, "score": 96.1 },
      { "StartTime": "2020-03-13T14:30:00Z", "EndTime": "2020-03-13T16:30:00Z", "IsAvailable": true, "score": 90.39 },
      { "StartTime": "2020-03-13T16:30:00Z", "EndTime": "2020-03-13T18:30:00Z", "IsAvailable": false, "score": 0 }
    ],

How can I achieve this?
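A sketch using spath and mvexpand (assuming each event contains one such payload; mvexpand preserves the original order of the entries, so head 1 picks the first true one):

```spl
| spath path=StatusList{}.status{} output=status_entry
| mvexpand status_entry
| spath input=status_entry
| where IsAvailable="true"
| head 1
| table StartTime EndTime
```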
I'm trying to create the below search with the following dimensions. I'm struggling to create the 'timephase' column. The 'timephase' field would take the same logic as the date range pickers in the global search, but only summon the data applicable to that timephase (i.e. "1 day" would reflect the data of the subsequent columns for 1 day ago, etc.). I tried to approach it with an eval case, but ran into a mutual-exclusion problem (the data captured in "1 day" would be excluded from "1 week", even though it should be counted). Does anyone have any recommendations for approaches to this?
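Since the ranges overlap, one approach is to avoid a single case() bucket entirely and count each range with a conditional stats function instead (a sketch; the base search is a placeholder). Each count() only counts events matching its own eval condition, so an event inside "1 day" is naturally also counted in "1 week" and "1 month":

```spl
index=myindex ...
| stats
    count(eval(_time >= relative_time(now(), "-1d"))) as "1 day"
    count(eval(_time >= relative_time(now(), "-7d"))) as "1 week"
    count(eval(_time >= relative_time(now(), "-30d"))) as "1 month"
```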
Hello, I am running Splunk Add-on for AWS 4.6.1 and Splunk App for AWS 6.0.0. The majority of app panels populate with data, but I also receive this error message on the dashboard:

Some panels may not be displayed correctly because the following inputs have not been configured: CloudTrail. Or, the saved search "Addon Metadata - Summarize AWS Inputs" is not enabled on Add-on instance.

I have tried to look for this saved search and enable it, but I could not find it. Has anyone had the same issue, and how did you resolve it? Thank you.
Is it possible to configure more than one cron schedule for one alert? Something like */2 9-11,11-13 * * 1-4,5-1. I think the answer is no, but I wanted to reconfirm. The reason I want to know is that the alert condition is the same, but the triggering times differ based on day and hours.
Is there a comparison between Splunk ES and Google's Chronicle Security? A top official here wonders about it.
Hi, the search below returns 558 events:

`CPU`
| stats values(SITE) as SITE count(process_cpu_used_percent) as "Number of CPU alerts" by host
| rename host as Hostname, SITE as Site
| search Hostname=9831

I am doing the same stats in a subsearch, and in this case I have 4389 events!

`wire` earliest=-7d latest=now
| stats last(AP_NAME) as "Access point", last(Building) as "Geolocation building" by host
| join host type=outer [| `CPU` earliest=-7d latest=now
    | stats values(SITE) as Site, count(process_cpu_used_percent) as "Number of CPU alerts" by host ]
| rename host as Hostname
| search Hostname=9831

What explains such a difference, even though I use the same stats count? What do I have to do in order to have the same number of events in the search and in the subsearch? Or is it not possible to have the same number of events in the subsearch? Thanks for your help.
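Two things worth checking: the two searches may not be scanning the same time range (only the second one sets earliest=-7d explicitly), and a join subsearch is subject to its own result-count and runtime limits, which can silently truncate or skew counts. A sketch (keeping the macros and field names from the post) that runs both legs over an identical explicit range and replaces join with append plus a final stats:

```spl
`wire` earliest=-7d latest=now
| stats last(AP_NAME) as "Access point", last(Building) as "Geolocation building" by host
| append [ search `CPU` earliest=-7d latest=now
    | stats values(SITE) as Site, count(process_cpu_used_percent) as "Number of CPU alerts" by host ]
| stats first("Access point") as "Access point", first("Geolocation building") as "Geolocation building",
        first(Site) as Site, sum("Number of CPU alerts") as "Number of CPU alerts" by host
| rename host as Hostname
| search Hostname=9831
```

Note that append subsearches have result limits of their own, so this is a way to test the hypothesis rather than a guaranteed fix at scale.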
I have a table: PageID, UserName, Date, count of hits to that page. I would like to find the average daily page hits, per article, at a UserID level (for the top 100 most frequently viewed pages). So for example: person xyz, on average, views page x n times per day over the last week. This is the start of the query...

... | bucket span=1d _time
| stats count by PageID, UserName, _time
| sort - count
| head 100

Any help much appreciated.
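Building on that start, a sketch (field names as in the table above) that first counts hits per user/page/day, then averages across the days, and ranks by total page popularity. Note that head here keeps the top 100 user/page rows; restricting to exactly the top 100 pages would need an extra dedup or subsearch on PageID.

```spl
... | bucket span=1d _time
| stats count as daily_hits by PageID UserName _time
| eventstats sum(daily_hits) as total_page_hits by PageID
| stats avg(daily_hits) as avg_daily_hits first(total_page_hits) as total_page_hits by PageID UserName
| sort - total_page_hits
| head 100
```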
Hi, I need to understand why, when I execute the first search, I get a much higher "Number of CPU alerts" count than with the second search. As you can see, the first search stats the data "by host SITE" while the second stats the data only by host. What I don't understand is that every host has its own SITE, so since I am doing the same kind of count, I should normally get the same result? Thanks for your help.

`CPU`
| stats count(process_cpu_used_percent) as "Number of CPU alerts" by host SITE
| search host=TUTU
Hi all, I'm calculating the average electrical energy consumption per produced piece, from today, of one of our production machines. Then I want to know the percentage by which this value differs from the average of the last 30 days. The average of the last 30 days is stored in a summary index as one value per day.

For just this moment, the verified value of the total electrical energy consumption (field "elEnergie1") is 18 kWh. But every new search returns a value which alternates between 55 and 65 kWh, sometimes around 22 too. This is the code:

index=machinedata_w05 source=W05WTSema4IV320732 name=S3.Energiedaten.Wirkenergie_Tag OR name=S7.Prozessdaten.wst_gesamt earliest=@d
| eval {name}=value
| eval day = strftime(_time, "%d.%m.%Y")
| rename S3.Energiedaten.Wirkenergie_Tag as elEnergie1 S7.Prozessdaten.wst_gesamt as Stk
| table _time elEnergie1 day Stk
| filldown elEnergie1
| autoregress elEnergie1
| table _time elEnergie1 elEnergie1_p1 day Stk
| where elEnergie1!="" OR elEnergie1_p1!="" OR Stk!=""
| eval diff=elEnergie1_p1-elEnergie1
| table _time elEnergie1 elEnergie1_p1 diff day Stk
| where diff>0 OR Stk!=""
| stats first(_time) as _time sum(diff) as elEnergie1 range(Stk) as Stk by day
| where Stk!=0
| eval elEnergie1Stk = round(elEnergie1/Stk, 5), elEnergie1=round(elEnergie1, 2)
| table elEnergie1Stk elEnergie1 Stk
| fields - _time
| append [ | search index=machinedata_w05_sum app=Medienverbrauch medium="el.Energie" machine=Sema4 earliest=-30d
    | stats avg(Verbrauch_elEnergie_pro_Stk) as avgliter
    | fields avgliter
    | eval avgliter=round(avgliter, 5) ]
| filldown
| eval abw = round((if(isnull(elEnergie1Stk), -1, elEnergie1Stk)-avgliter)/avgliter*100, 1)
| table abw elEnergie1Stk avgliter elEnergie1 Stk
| where abw!=""
| eval abw=if(abw<100, "---", abw)

After some trial and error, I found out that if I just delete the append command, the energy consumption is calculated correctly (the dependent calculations are then of course wrong, but that's not the point). So my question is: why does removing/adding the append command change the value of a previously calculated field? I have absolutely no idea what is happening here! (If you want to know more about what the whole search does, just ask me.)
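One way to rule out interaction between the main pipeline and the subsearch (a sketch, keeping the index and field names from the search above, and not a confirmed explanation of the behavior): fetch the 30-day average with appendcols instead of append, so the subsearch result lands as a column beside the first result row rather than as an extra row that the bare filldown then has to smear across:

```spl
... | table elEnergie1Stk elEnergie1 Stk
| appendcols [ search index=machinedata_w05_sum app=Medienverbrauch medium="el.Energie" machine=Sema4 earliest=-30d
    | stats avg(Verbrauch_elEnergie_pro_Stk) as avgliter
    | eval avgliter=round(avgliter, 5) ]
| filldown avgliter
| eval abw = round((elEnergie1Stk - avgliter)/avgliter*100, 1)
```

If elEnergie1 stays stable with this variant, the problem lies in how the appended row interacts with the unqualified filldown, not in the consumption calculation itself.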
Can anyone help me with this? The logs are now in Apex Central format, as TMCM is now Apex Central. However, it seems the logs changed, and the TA for Trend Micro OfficeScan doesn't work properly. Has anyone made this work? My logs now are as follows:

Mar 6 12:01:05 Apex Central:700107 Generated="2020-03-06 11:49:05" Product_Entity/Endpoint="BRAHMS_OSCE" Endpoint="M32124" Managing_Server="BRAHMS" Product="Apex One" Target_Process="C:\Windows\System32\wbem\WmiPrvSE.exe" File_Name="\Device\Harddisk1\DR1" Device_Type="USB storage device" Permission="List device content only"
I'm having an issue because I need to show in a report only the first ticket received by an agent and the latest one; all the other tickets in the middle I have to leave out. Here is the evidence:

Of all the tickets assigned to user1 or user2, how can I capture only the oldest and the newest one? Thanks in advance.
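A sketch (the field names ticket_id and agent are assumptions for this data): stats can pick both ends in one pass, since earliest() and latest() return the value from the chronologically first and last event per group.

```spl
... | stats earliest(_time) as first_time, earliest(ticket_id) as first_ticket,
          latest(_time) as last_time, latest(ticket_id) as last_ticket by agent
| eval first_time=strftime(first_time, "%F %T"), last_time=strftime(last_time, "%F %T")
```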
Hi, I am currently working on a search that is supposed to tell me whether users went the prescribed CyberArk route or bypassed it for system access. So theoretically I should use events 4624 and 4648 and see whether the connections come from CyberArk or not. But I found plenty of login events from the Citrix servers where our users do their work. Following up on this, it turns out that users on Citrix use a web browser to access an application on the target system that uses SSO for the user login. This also shows up as 4624, which for my purpose would be a false positive.

Looking closer at the generated 4624 events, the key difference is the LogonProcessName and AuthenticationPackageName in the event. If AuthenticationPackageName=NTLM or LogonProcessName=NtLmSsp, then this seems to indicate an SSO login. And AuthenticationPackageName=Kerberos or LogonProcessName=Kerberos seem to be indicators of an RDP session (via CyberArk). Excluding the NTLM events seems to be the way to go, but as my Windows background is practically nil after years of AIX/Linux, I wonder whether someone could confirm my hypothesis. Unfortunately I do not have a lab for checking this with a control case.

thx afx
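If the hypothesis holds, a classification sketch would look like the following (the index name and field names are assumptions; verify them against the Windows TA's extractions). Note the caveat that NTLM can also appear for legitimate RDP sessions, e.g. connections made to an IP address instead of a hostname, so this split labels candidates rather than proving the access path:

```spl
index=wineventlog EventCode=4624
| eval access_path=case(
    AuthenticationPackageName=="NTLM" OR LogonProcessName=="NtLmSsp", "SSO / browser login (candidate false positive)",
    AuthenticationPackageName=="Kerberos" OR LogonProcessName=="Kerberos", "interactive / RDP (candidate CyberArk route)",
    1==1, "other")
| stats count by access_path, user, host
```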