All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Any ideas on how to install the Nest for Splunk app if it's not visible when browsing for it in the Splunk console? How can the app be installed if it doesn't show up when you browse for it?
Hi all, I use the Splunk forwarder to read OSSEC alert logs and index them in Splunk. I'm using all the latest versions, but it only saves OSSEC logs as raw events and no fields are extracted. As the OSSEC add-on is old, is there any way to make it work with new OSSEC versions so that Windows and Linux logs sent via the forwarder are indexed correctly? Thanks
So I need a stats/chart/timechart etc. that shows a distinct count of separate login IDs from 7:55:00-8:54:59, then 8:55:00-9:54:59, plus the previous week for that same hour, and the week before that for the same hour. I seemed to be having good luck with timechart and chart as long as I start on the hour, but when I try to shift by five minutes everything goes crazy (probably my ignorance, which I'm hoping I can overcome here). I would prefer to learn, but I could also reverse engineer a good answer. The closest I've come to a correct answer is:

index=logs* "EventStreamData.eventName"=RetrieveCustomerAccounts EventStreamData.args.request.apiKey=MAIN "EventStreamData.response.entries{}.product.productName"="Checking"
| eval week1=relative_time(now(),"-1w@w")
| eval Hour = strftime(_time,"%H:%M")
| timechart span=60m dc(EventStreamData.args.request.userId) as 360_Volume

But the output is on the hour, not shifted by five minutes, and something strange happens with the first block of data:

2020-04-10 05:00    709
2020-04-10 06:00    10502
2020-04-10 07:00    16122
2020-04-10 08:00    20273

Thanks in advance.
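The arithmetic behind a :55-aligned hour window can be sketched in plain Python (this is an illustration of the bucketing logic only, not Splunk itself; in SPL the same idea is commonly done by shifting _time before binning and shifting it back afterwards):

```python
# Sketch: to bucket events into hour-long windows that start at :55
# rather than on the hour, shift each timestamp forward by 5 minutes,
# floor to the hour, then shift back.
SHIFT = 5 * 60   # five minutes, in seconds
HOUR = 60 * 60

def bucket_start(epoch: int) -> int:
    """Return the start of the :55-aligned hour window containing epoch."""
    return ((epoch + SHIFT) // HOUR) * HOUR - SHIFT
```

With this scheme an event at exactly hh:00 lands in the window that started at (hh-1):55, which matches the 7:55:00-8:54:59 windows described above.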
I have streaming data, including fields called APPID and DURATION, where DURATION is the duration in ms for the APPID. Now, I want a timechart which shows only the top 3 APPIDs by average DURATION. So, assume there are 10 events split across 4 APPIDs, something like this:

APPID   DURATION
appid1  10
appid2  20
appid3  30
appid1  40
appid1  50
appid4  60
appid2  70
appid4  80
appid3  90
appid2  100

In this case my timechart of the average should show only appid2, appid3, and appid4, since the average durations of these three are the top 3. I know that using eventstats I can get the average by APPID, but I am stuck at the ranking stage.
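The ranking step itself is small; here is a plain-Python sketch of it using the sample data above (illustrative only — in SPL one approach is eventstats for the per-APPID average followed by a sort/filter, but this just shows the arithmetic):

```python
from collections import defaultdict

# Average DURATION per APPID, then keep the three IDs with the highest averages.
events = [("appid1", 10), ("appid2", 20), ("appid3", 30), ("appid1", 40),
          ("appid1", 50), ("appid4", 60), ("appid2", 70), ("appid4", 80),
          ("appid3", 90), ("appid2", 100)]

totals = defaultdict(lambda: [0, 0])          # appid -> [sum, count]
for appid, duration in events:
    totals[appid][0] += duration
    totals[appid][1] += 1

averages = {a: s / n for a, (s, n) in totals.items()}
top3 = sorted(averages, key=averages.get, reverse=True)[:3]
```

On this sample, top3 contains appid4, appid2, and appid3, matching the expected result in the question.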
I recently wiped my server clean of all Splunk files to start fresh with 8.0.3. I am able to forward data from my Windows machine using Sysmon. All sourcetypes show up correctly in Splunk with no issues. The problem arises when I start to search. For any search that includes the equals sign "=", the search returns nothing. For example, EventID = 4104 returns 0 events, but EventID 4104 returns eight events. Is this a new feature? This is happening on every search that requires an equals sign. Any help is appreciated.
index=_internal host=abc123 source="metrics.log" group=tcpin_connections fwdType=uf
| dedup hostname
| table hostname

I am putting hostname=xyz578 (output of the above query) into the query below:

index=* host=abc123 "xyz578"

but I'm not getting any output. Please help me with the missing part.
Hi, Does anyone know where I may find official documentation that will help me resolve this problem? I have renewed a certificate using this tutorial, but for some reason MongoDB is still not starting. https://splunkonbigdata.com/2019/07/03/failed-to-start-kv-store-process-see-mongod-log-and-splunkd-log-for-details/

mongodb.log is showing this error:

2020-04-11T10:27:08.899Z W CONTROL [main] net.ssl.sslCipherConfig is deprecated. It will be removed in a future release.
2020-04-11T10:27:08.902Z F NETWORK [main] The provided SSL certificate is expired or not yet valid.
2020-04-11T10:27:08.902Z F - [main] Fatal Assertion 28652 at src/mongo/util/net/ssl_manager.cpp 1145
2020-04-11T10:27:08.902Z F - [main] ***aborting after fassert() failure

Can anyone here help? Cheers, Konrad
Hi Everyone, We have had some security issues raised: we want to set the Secure flag on all cookies and set the SameSite attribute to Strict or Lax. Are there any configuration settings provided by Splunk for this? Please help me out.
I have one accelerated data model which contains 5 event datasets with simple field conditions. Now when I try to just get a count using tstats count from datamodel=X.Y1 where source=A, it didn't seem to be accelerated compared to running it on the other 4 datasets. Then I tried passing summariesonly=t in tstats and I found a count of 0 for that 1 dataset, while the others were giving correct counts. Why is this the case for this one dataset (i.e. why is it not summarized)? What might be the cause of this?

Edit (11/04/2020): I deleted all 4 other datasets and rebuilt the acceleration, and it turns out it worked. Maybe I am hitting some limit on data models which I am not aware of? This is not random behavior; I did this twice on 2 different instances.

Edit (13/04/2020): Found these errors in search.log, which suggest that it is not finding tsidx files for this particular dataset, meaning it is not summarized for some reason even though the data model shows acceleration status as 100% (Note: I've removed some parts from the log strings):

st_select_handle_new found no TSIDX files warm_rc=[0,0] errno=18

and

Mixed mode is disabled, skipping search for bucket with no TSIDX data: E:\Splunk\indexes\db_1586485508_1586389645_42

Edit (14/04/2020): I was able to fix this issue by simply changing the name of the dataset that was not getting accelerated (it was named executed_background_jobs and I renamed it to batch_jobs_exec). It's still a mystery why it didn't work with the previous name. Turns out the above fix was random behavior; it is again having issues creating tsidx files.

Edit (17/04/2020): If anyone knows what factors can cause this issue, I would appreciate the help. There are no troubleshooting docs or guides for data model acceleration whatsoever.

Edit (22/04/2020): Found on the monitoring console that acceleration searches are being skipped 92% of the time for this dataset (in the last 4 hours) and others are also getting skipped (all of them above a 60% skip ratio).
After looking at the answer by @sowings (link) I came to know that each root dataset has its own job (which can be seen in the monitoring console as well), and as I have 5 root datasets, there are 5 acceleration report searches in total running every 5 minutes (set in acceleration settings). As the VM has 4 cores and the concurrent job limit is set to 5, it's certain that most of them are going to get skipped. In the monitoring console it also seems that the highest skip ratio was for this one specific dataset (approx 92% for the last 4 hours, maybe because it has a large amount of data? Not sure). This also explains my previous edit where I tried to accelerate the data model with only a single root dataset: as it would only need one job, it wouldn't be bothered by the skipping issue. I am currently trying the allow_skew parameter available in datamodels.conf, along with reducing the cron schedule for the jobs, and will post an update here. Reference: https://answers.splunk.com/answers/543887/accelerated-data-model-100-complete-even-though-mo.html Thanks, Harsh
How do I send journal logs to Splunk? journalctl -u servicename — here the journal logs are raw logs. Will Splunk read raw logs?

Configuration setup on my bos-server1:

root@bos-server1:/opt/splunkforwarder/etc/apps/linux_auth/default# cat inputs.conf
[monitor:///var/log/auth*.log]
sourcetype = linux_authlog
index = linux_log
disabled = false

[monitor:///var/log/syslog]
sourcetype = linux_syslog
index = linux_log
disabled = false

Below is the journal logs location:

root@bos-server1:/run/log/journal/112824edd9f56398bab569035733662e# pwd
/run/log/journal/112824edd9f56398bab569035733662e
root@bos-rndapp02:/run/log/journal/112824edd9f56398bab569035733662e# ls -al
total 344472
drwxr-s---+ 2 root systemd-journal 220 Jan 21 13:40 .
drwxr-sr-x 3 root systemd-journal 60 Sep 21 08:06 ..
-rw-r-----+ 1 root systemd-journal 41943040 Jan 14 02:57 system@dcf33424670b4269a8a8b1b6b5b86200-000000000043823d-00059bfe728bd765.journal
-rw-r-----+ 1 root systemd-journal 42151936 Jan 15 01:07 system@dcf33424670b4269a8a8b1b6b5b86200-0000000000443355-00059c14f222a797.journal
I have the Trial version of Splunk with the Microsoft 365 App. How do I link Office 365 with Splunk?
Hi, I need to extract multiple fields (from events that are coming via HEC) and assign an index based on the concatenated values. (I know that you can assign an index per HEC token, but let's assume that all the events are coming with the same token.)

Example payload:

{
  "sourcetype": "hec:generic",
  "event": {
    "platform": "platform01",
    "service": "service02",
    "env": "npd",
    "type": "alert",
    "test": "true",
    "message": "TEST HEC 040"
  }
}

I've figured out how to do it if I know the order:

props.conf
[hec:generic]
TRANSFORMS-index_selector = index_selector

transforms.conf
[index_selector]
REGEX = platform"\s?:\s?"(?P<platform>\w+)",\n\s*"service"\s?:\s?"(?P<service>\w+)",\n\s*"env"\s?:\s?"(?P<env>\w+)
DEST_KEY = _MetaData:Index
FORMAT = $1_$2_$3

But what if I don't know the order of the platform, service, and env fields? Any suggestions? Can I somehow have 3 separate REGEX lines? I tried the below:

REGEX = platform"\s?:\s?"(?P<platform>\w+)
REGEX = service"\s?:\s?"(?P<service>\w+)
REGEX = env"\s?:\s?"(?P<env>\w+)
DEST_KEY = _MetaData:Index
FORMAT = $1_$2_$3

and the result is that it just picks up the last REGEX, so basically it tries to assign "npd_$2_$3" as the index.
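The order-independence problem itself is easy to see outside Splunk: if each field gets its own search over the raw event, the position of the keys no longer matters. A plain-Python sketch of that idea (illustrative only — a single transforms.conf stanza takes one REGEX, so this is not a drop-in config):

```python
import re

# Sample raw event with the keys deliberately out of the "expected" order.
raw = '''{"sourcetype": "hec:generic", "event": {"env": "npd",
"platform": "platform01", "service": "service02", "type": "alert"}}'''

def pick(field: str, text: str) -> str:
    """Find "field": "value" anywhere in the text, regardless of key order."""
    m = re.search(r'"%s"\s*:\s*"(\w+)"' % field, text)
    return m.group(1) if m else "unknown"

# Concatenate the three values into the target index name.
index_name = "_".join(pick(f, raw) for f in ("platform", "service", "env"))
```

Here index_name comes out as platform01_service02_npd even though env appears first in the payload.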
I've looked through several of the other posts on Answers regarding this problem and I think I've tried all the suggestions, so here's my post: I have a script I put in $SPLUNK_HOME/etc/apps/search/bin, as below:

splunk@splunk1:/opt/splunk/etc/apps/search/bin$ ll freq.py
-r-xr-xr-x 1 splunk splunk 657 Apr 10 20:33 freq.py*

It runs fine when testing with Splunk's Python:

splunk@splunk1:/opt/splunk/etc/apps/search/bin$ /opt/splunk/bin/splunk cmd python ./freq.py splunk.com
domain,frequency
splunk.com,5.96996388594

I created a transforms.conf in $SPLUNK_HOME/etc/apps/search/local, as below:

splunk@splunk1:/opt/splunk/etc/apps/search/local$ cat transforms.conf
[freqserver]
external_cmd = freq.py domain
external_type = external
fields_list = domain, frequency

Made sure it had the right Linux permissions and owner:

splunk@splunk1:/opt/splunk/etc/apps/search/local$ ll
total 20
drwx------ 2 splunk splunk 4096 Apr 10 20:56 ./
drwxr-xr-x 10 splunk splunk 4096 Mar 10 21:03 ../
-rw------- 1 splunk splunk 807 Mar 30 00:49 indexes.conf
-rw------- 1 splunk splunk 122 Mar 10 21:49 inputs.conf
-rw------- 1 splunk splunk 101 Apr 10 20:56 transforms.conf

In the lookup definition, for permissions, it says that the object should appear in all apps and everyone has read and write permissions. I performed all the above as the admin of a single instance of Splunk. I restarted Splunk. So now I run a search:

index="bro" earliest=-1y sourcetype=bro_dns | fields query | rename query as domain | lookup freqserver domain

but I get the following error:

Error in 'lookup' command: Could not construct lookup 'freqserver, domain'. See search.log for more details.
This is on Splunk version 8.0.2. I was trying to follow these instructions for creating a new external lookup: https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Configureexternallookups

That error is the same error I get if I try a lookup name that does not exist:

index="bro" earliest=-1y sourcetype=bro_dns | fields query | rename query as domain | lookup nonsensename domain

would give the same kind of "Could not construct lookup" error... Any suggestions?
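For context on what the script is expected to do: an external lookup script reads CSV rows on stdin and writes CSV rows with the looked-up column filled in on stdout. A minimal skeleton of that protocol (the score function here is a placeholder, not the real freq.py, and this sketch doesn't diagnose the error above):

```python
import csv
import io
import sys

def score(domain: str) -> float:
    """Placeholder metric; the real freq.py computes a character-frequency score."""
    return float(len(domain))

def run(instream, outstream):
    # Splunk sends a CSV with the lookup's fields_list as the header;
    # we fill in the output column for every row that has an input value.
    reader = csv.DictReader(instream)
    writer = csv.DictWriter(outstream, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row.get("domain"):
            row["frequency"] = score(row["domain"])
        writer.writerow(row)

if __name__ == "__main__":
    run(sys.stdin, sys.stdout)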
Hi guys, The team has created this search to alert when a host has an infection that has been removed and re-infected multiple times over multiple days, but we are not sure what we could do to get better results, because we are getting a lot of alerts with this:

| tstats summariesonly=true allow_old_summaries=true dc(Malware_Attacks.date) as "day_count",count from datamodel="Malware"."Malware_Attacks" by "Malware_Attacks.dest","Malware_Attacks.signature","Malware_Attacks.action"
| rename "Malware_Attacks.dest" as "dest","Malware_Attacks.signature" as "signature","Malware_Attacks.action" as action
| where 'day_count'>5
| fields dest signature action day_count count
| search signature!=unknown

Also, do you guys know if the last pipe will break the search, since it is a data model search?
I am using setnull and setparsing transforms to include and exclude log events. Here are the files:

props.conf
[OktaIM2:log]
TRANSFORMS-set = setnull,setparsing

transforms.conf
[setnull]
REGEX = gmail.com
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = yahoo.com
DEST_KEY = queue
FORMAT = indexQueue

The setnull filter works well, but not the setparsing one. I tried the following: 1) changed the order to setparsing,setnull in props.conf, and restarted Splunk after making changes. Any insights into why the include filter is not working as expected?
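As a sanity check of the intended routing logic: transforms listed in a TRANSFORMS- setting are applied in order, and a later matching transform overwrites the queue set by an earlier one, so the last match wins. A toy simulation of that behavior (not Splunk internals, just the ordering semantics):

```python
import re

# Each (pattern, queue) rule is applied in order; the last rule whose
# regex matches the event decides the final queue for that event.
rules = [
    (re.compile(r"gmail\.com"), "nullQueue"),   # setnull
    (re.compile(r"yahoo\.com"), "indexQueue"),  # setparsing
]

def route(event: str, default: str = "indexQueue") -> str:
    queue = default
    for pattern, dest in rules:
        if pattern.search(event):
            queue = dest
    return queue
```

Under this model an event matching only gmail.com is dropped, an event matching only yahoo.com is indexed, and an event matching neither keeps the default queue, which is why a pure "include" filter usually needs a catch-all nullQueue rule first.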
I'm having difficulty understanding why Query 2 is returning a different count than the other two queries. The cluster_name field is an indexed field. Does anyone know why this is occurring and have any suggestions to resolve or troubleshoot this issue?

Query 1:
index=infra_k8s* sourcetype="kube:objects:pods" | stats count by cluster_name

cluster_name  count
...
clusterxyz    436
...

Query 2:
index=infra_k8s* sourcetype="kube:objects:pods" cluster_name=clusterxyz | stats count by cluster_name

cluster_name  count
clusterxyz    68

Query 3:
index=infra_k8s* sourcetype="kube:objects:pods" | eval cluster=cluster_name | search cluster="clusterxyz" | stats count by cluster

cluster     count
clusterxyz  436
I have 2 sources in separate indexes. The first contains a field "appId"; to get the human-readable name (appDisplayName) I need to search the 2nd source. Normally I'd do this with a subsearch:

index=index2 sourcetype=st2 [search index=index1 sourcetype=st1 | stats values(appId) as appId | format]
| stats values(appDisplayName) as appDisplayName by appId

The problem I'm running into is that the ID for the app in the first source is always called "appId", but depending on the type of app (which I don't know from the first source), appId will correspond to 1 of 2 fields in the second source: either "appId" or "resourceId". I need to find the "appId" from the 1st source in either "appId" or "resourceId" in the second source, and the corresponding human-readable name will be either "appDisplayName" or "resourceDisplayName" in the second source. Any ideas on how to approach this? Thanks!
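The shape of the problem is essentially a coalescing join: key the second source by whichever ID field each event carries, then resolve IDs from the first source against that single map. A plain-Python sketch of the idea (the field values are made up; in SPL the analogous trick is usually an eval with coalesce over the two ID fields before the stats):

```python
# Toy stand-in for the second source: some events carry appId/appDisplayName,
# others carry resourceId/resourceDisplayName.
source2 = [
    {"appId": "a1", "appDisplayName": "App One"},
    {"resourceId": "r9", "resourceDisplayName": "Resource Nine"},
]

# Build one map keyed by whichever ID field is present.
names = {}
for event in source2:
    key = event.get("appId") or event.get("resourceId")
    name = event.get("appDisplayName") or event.get("resourceDisplayName")
    if key:
        names[key] = name

def display_name(app_id: str) -> str:
    """Resolve an appId from the first source against either ID field."""
    return names.get(app_id, "unknown")
```

Once the two key fields are coalesced into one, the original subsearch pattern works unchanged against the combined key.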
Hi AppD Team - I am following the instructions at the link https://docs.appdynamics.com/display/PRO44/Install+the+.NET+Core+Microservices+Agent+for+Windows

As per the instructions, we need to set the following environment variables, and the path should be the complete path. Set up your environment variables for your system application:

CORECLR_ENABLE_PROFILING=1
CORECLR_PROFILER={39AEABC1-56A5-405F-B8E7-C3668490DB4A}
CORECLR_PROFILER_PATH_32=<actual_path>\AppDynamics.Profiler_x86.dll
CORECLR_PROFILER_PATH_64=<actual_path>\AppDynamics.Profiler_x64.dll

where <actual_path> is the complete path to the AppDynamics.Profiler DLL. I have 6 APIs hosted on that server, and I want to set up AppDynamics for each; each API has its own set of AppDynamics.Profiler DLLs and corresponding .config files. How do I set the environment variables for all 6 services? If you add the same variables system-wide, only the last entry will be considered, I believe?
Given a log with this format, how do you graph HTTP methods?

Apr 10 13:21:19 ip-10-245-220-105 de0df3ba02a9[1256]: {"userId":"[REDACTED]","url":"/this/is/the/url","headers":{"host":"host.elb.amazonaws.com","accept":"application/my_application+json","referer":"https://send_referral/"},"requestId":"RequestID","oktaId":"oktaID","method":"GET","queryParams":{},"level":"info","message":"api-request","label":"qpp-cmswi-api-prod","timestamp":"2020-04-10T17:21:19.763"}

That is, a graph of PUT, POST, DELETE, PATCH, and GET operations.
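The parsing step can be sketched outside Splunk: the JSON payload starts at the first "{" of each syslog line, so slicing there and tallying the "method" key gives the counts to graph. A minimal illustration with made-up sample lines (in Splunk itself one would more likely extract the JSON with spath or json sourcetyping and then chart count by method):

```python
import json
from collections import Counter

# Hypothetical sample lines in the same syslog-prefix + JSON-body shape.
lines = [
    'Apr 10 13:21:19 ip-10-245-220-105 de0df3ba02a9[1256]: {"method": "GET", "url": "/a"}',
    'Apr 10 13:21:20 ip-10-245-220-105 de0df3ba02a9[1256]: {"method": "POST", "url": "/b"}',
    'Apr 10 13:21:21 ip-10-245-220-105 de0df3ba02a9[1256]: {"method": "GET", "url": "/c"}',
]

# Slice off the syslog prefix at the first "{", parse the JSON, count methods.
methods = Counter(json.loads(line[line.index("{"):])["method"] for line in lines)
```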
E.g.: index = userinformation — _raw doesn't have any field or value related to the field "ue", but "ue" is being shown in Interesting Fields:

ue = abc@splunk.com
ue = xyz@splunk.com

So my question is: what is generating this field in the index?