All Topics

I have two events.
Event 1: index=non prod source=test.log "recived msg" | fields _time batchid
Event 2: index=non-agent source=test1log "acknowledgement msg" | fields _time batchid
How do I calculate the time between the start event and the end event and find where it is more than 30 seconds?
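A minimal sketch of one approach, assuming both sources carry the same batchid field (index and source names are copied from the question and may need quoting or correcting):

(index="non prod" source="test.log" "recived msg") OR (index="non-agent" source="test1log" "acknowledgement msg")
| stats earliest(_time) as start_time latest(_time) as end_time by batchid
| eval duration=end_time-start_time
| where duration > 30

stats collapses the pair of events per batchid, so duration is the gap between the received and acknowledgement messages.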
I am building an environment on AWS, following the official manual, in order to visualize wire data, but the manual is hard to follow and I am struggling. Could you tell me what settings are needed in the following files? This is a basic question, but I would appreciate your help.

Manual: https://docs.splunk.com/Documentation/StreamApp/8.1.0/DeployStreamApp/ConfigureStreamForwarder
Splunk ver 9.0.4
Stream ver 8.1.0

Server A (Splunk Stream): /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf
Server B (Stream Forwarder): /opt/streamfwd/local/streamfwd.conf
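As a rough sketch only (setting names should be verified against the ConfigureStreamForwarder page linked above; the token and addresses are placeholders): Server A needs an HTTP Event Collector input enabled in the splunk_httpinput app, and Server B's independent forwarder needs that token plus the receiver address.

# Server A: /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf
[http]
disabled = 0

[http://streamfwd]
disabled = 0
token = <your-HEC-token>

# Server B: /opt/streamfwd/local/streamfwd.conf
[streamfwd]
httpEventCollectorToken = <your-HEC-token>
indexer.0.uri = https://<server-A-address>:8088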
Exclude the given IPs from the Splunk search query below and modify it accordingly.
IPs to exclude: 10.17.1.55, 10.17.1.56, 10.17.1.57, 192.168.216.31, 192.168.215.129, 192.168.215.99

|tstats summariesonly=true count dc(All_Traffic.dest_ip) as "num_dest_ip",dc(All_Traffic.dest_port) as "num_dest_port", values(sourcetype) as sourcetype, values(All_Traffic.action) as "action" from datamodel="Network_Traffic"."All_Traffic" where (sourcetype="*") (All_Traffic.src_ip=10.0.0.0/8 OR All_Traffic.src_ip=192.168.0.0/16 OR All_Traffic.src_ip=172.16.0.0/12) AND (All_Traffic.dest_ip=10.0.0.0/8 OR All_Traffic.dest_ip=192.168.0.0/16 OR All_Traffic.dest_ip=172.16.0.0/12) by "All_Traffic.src_ip","All_Traffic.dest_port" , _time span=5m
|rename "All_Traffic.*" as "*"
|sort - count
| where num_dest_ip>300 AND dest_port!="0"
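A sketch of one way to do it, assuming the IPs should be excluded as source addresses (add a matching NOT block on All_Traffic.dest_ip if they should also be excluded as destinations): add a NOT clause to the tstats where condition so the events are filtered before aggregation.

|tstats summariesonly=true count dc(All_Traffic.dest_ip) as "num_dest_ip", dc(All_Traffic.dest_port) as "num_dest_port", values(sourcetype) as sourcetype, values(All_Traffic.action) as "action" from datamodel="Network_Traffic"."All_Traffic" where (sourcetype="*") (All_Traffic.src_ip=10.0.0.0/8 OR All_Traffic.src_ip=192.168.0.0/16 OR All_Traffic.src_ip=172.16.0.0/12) AND (All_Traffic.dest_ip=10.0.0.0/8 OR All_Traffic.dest_ip=192.168.0.0/16 OR All_Traffic.dest_ip=172.16.0.0/12) NOT (All_Traffic.src_ip=10.17.1.55 OR All_Traffic.src_ip=10.17.1.56 OR All_Traffic.src_ip=10.17.1.57 OR All_Traffic.src_ip=192.168.216.31 OR All_Traffic.src_ip=192.168.215.129 OR All_Traffic.src_ip=192.168.215.99) by "All_Traffic.src_ip","All_Traffic.dest_port", _time span=5m
|rename "All_Traffic.*" as "*"
|sort - count
| where num_dest_ip>300 AND dest_port!="0"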
Hi, I want to hide the label values in the colored cells. Based on some conditions I gave the cells colors. Could someone please suggest how I can hide the values? My current dashboard looks like this (screenshot), and the expected result is shown below it (screenshot).
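One common community workaround, as a sketch only (it assumes a Simple XML dashboard, a table with id "my_table", and that the colored values sit in the third column; the selector may need adjusting for your Splunk version): keep the color formatting but make the cell text transparent with CSS from a hidden HTML panel.

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* hide the text of the 3rd column in the table with id "my_table" */
        #my_table table tbody td:nth-child(3) {
          color: transparent !important;
        }
      </style>
    </html>
  </panel>
</row>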
Some of the BTs are not discovered in the AppDynamics controller because there is no load in the environment. We have two environments, Dev and Prod. In Dev the app team is able to generate load and configure the health rules and dashboards. The customer wants the same in Prod as well, but in Prod the BTs are not discovered because there is no load, so we are unable to meet the expectations. Can BTs be discovered in the controller without any load? Or is there any other way we can address this requirement? We can't expect much load in production in the near future, but we need to finish the health rules and dashboards for the go-live. Thank you in advance!!
Hi, we have a role in our Splunk instance (can_delete) that allows users to delete any data or indexes in Splunk. If I select users from Settings and try to remove that role from all users, it shows that the role has been removed, but when I refresh the page I see that the role is back on the users again. Can someone advise please? Thanks
index=* success="false" process_name="C:\\Windows\\System32\\svchost.exe" | stats count as failedAttempts by user | sort -failedAttempts index=* success="false" process_name="C:\\Windows\\System... See more...
index=* success="false" process_name="C:\\Windows\\System32\\svchost.exe" | stats count as failedAttempts by user | sort -failedAttempts index=* success="false" process_name="C:\\Windows\\System32\\svchost.exe" | timechart count by user | sort by _time I tried do both query but I'm stuck...Need any guidance, thank you
Do the lengths of the metadata fields and their values, such as time, host, source and sourcetype, count against license consumption? For example, the following HEC JSON has a length of 212 characters but the event (_raw) is only 20 characters. Is license calculated against the total JSON length or the _raw length?

{
  "time": 1437522387,
  "host": "dataserver01.applicationmonitoring.com",
  "source": "/var/logs/application_monitoring.log",
  "sourcetype": "application_status",
  "event": {
    "message": "Seems OK"
  }
}
I have a bar graph that shows the status (Success and Failed). I want to display the bar with both values even when there are no results for Failed. Currently, it shows a bar with only the Success status.
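One common pattern, as a sketch (the base search and the status field name are placeholders for your own), is to seed a zero-count row for each expected status so both series always exist:

<your base search>
| stats count by status
| append
    [| makeresults
     | eval status=split("Success,Failed", ",")
     | mvexpand status
     | eval count=0
     | fields status count]
| stats sum(count) as count by status

The appended rows contribute 0 to the final sum, so real counts are unchanged and a Failed bar with value 0 still appears.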
Hi Splunk Works, we're investigating using the Splunk Add-on for Salesforce Streaming API (TA-sfdc-streaming-api) app (v1.0.5 - https://splunkbase.splunk.com/app/5689). I see it does not currently have proxy configuration support in the app. The business does not want to configure the whole Splunk instance to be proxy-enabled and would like to limit it to app configuration. Are there any plans to add this support in the near future? Thanks
Hello! I've been trying to solve this problem for a couple of days now but can't seem to figure it out. Basically, I want to get the total count received for "Field A" today, and get an average of "Field B" for the past week, displayed in a single table/result. Field B is the time Field A was received. I will use this to determine whether Field A arrived on time today, but I also need the total count for other purposes.

Example desired output:
Date        Field      Count    AvgTimeReceived    TimeReceived
mm/dd/yy    "FieldA"   5        5:00:00            7:00:00

Where the columns Date, Field, Count, TimeReceived are from today's events, and AvgTimeReceived is an average for the past 7 days. Thanks!
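One sketch of an approach, with the index, field names, and the choice of max() for today's TimeReceived all being assumptions to adapt: search the whole week once, convert each event's timestamp to seconds since midnight, and split the aggregation between today and the previous days.

index=<your_index> FieldA=* earliest=-7d@d latest=now
| eval secs_since_midnight=_time - relative_time(_time, "@d")
| eval is_today=if(_time >= relative_time(now(), "@d"), 1, 0)
| stats count(eval(is_today==1)) as Count, max(eval(if(is_today==1, secs_since_midnight, null()))) as today_secs, avg(eval(if(is_today==0, secs_since_midnight, null()))) as avg_secs
| eval TimeReceived=tostring(round(today_secs), "duration"), AvgTimeReceived=tostring(round(avg_secs), "duration"), Date=strftime(now(), "%m/%d/%y"), Field="FieldA"
| table Date Field Count AvgTimeReceived TimeReceived

tostring(<seconds>, "duration") renders the seconds-since-midnight values as HH:MM:SS, matching the example output.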
Hello, I'm trying to accumulate and analyze a person's risk score every day, once per day, and only fire when the total score for a given user exceeds a pre-determined threshold for that amount of time. For example, if I have a threshold chart for 1 day, 1 week, 2 weeks, 3 weeks, 1 month, 2 months, 3 months, etc., I want a running total of all the risk the person has generated, but I only want to review it when the accumulated total exceeds the threshold for the given period of time.

index=summary_events
| bin _time span=1d
| table _time, user, base_score
| timechart useother=f span=1d sum(base_score) as total_score by user

This didn't produce the results I was expecting because it only gave me the totals for that day, not the accumulated total, and the accum command doesn't seem to take a by clause. I'm kind of striking out on how to properly approach this. I would love some suggestions.
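For the running-total part, a sketch (assuming base_score and user exist in the summary events; 100 is just an example threshold): streamstats, unlike accum, does accept a by clause.

index=summary_events
| bin _time span=1d
| stats sum(base_score) as daily_score by _time, user
| streamstats sum(daily_score) as running_total by user
| where running_total > 100

streamstats keeps a separate running sum per user across the daily rows, which is the accumulated total the timechart approach was missing. The per-window thresholds (1 day, 1 week, ...) could then be brought in via a lookup instead of the fixed number.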
I have a dashboard that shows a user's dashboards and reports in the app. I can click the object I want and it will call a custom command that uses the REST API to make the permission change. This works fine when the command is invoked in a panel that is hidden until an object is selected. However, when I implement a modal pop-up that has the REST API call search defined and run in a .js file, I sometimes get 404 and 409 errors when changing the object's permissions, but the object's permissions are still successfully changed. Edit: I checked the internal log, and when I run the custom command via the JavaScript file, it calls the REST API 3 times. Running it from a dashboard always runs it once.
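For comparison, a minimal SplunkJS pattern that should dispatch the custom command exactly once (a sketch; the command name, search string, and ids are made-up placeholders, and the modal wiring is omitted):

require(["splunkjs/mvc/searchmanager", "splunkjs/mvc/simplexml/ready!"], function (SearchManager) {
    // Create the manager once; autostart=false means nothing runs until startSearch() is called.
    var permSearch = new SearchManager({
        id: "perm_change_search",
        search: "| mycustomcommand object=\"<object_name>\" sharing=\"app\"",
        autostart: false,
        cache: false
    });

    // Call this from the modal's confirm handler so the command is dispatched once per click.
    function applyPermissionChange() {
        permSearch.startSearch();
    }
});

One thing worth checking against this is whether the modal code creates or starts its SearchManager more than once per click, which could explain the repeated REST calls in the internal log.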
There are many good apps on Splunkbase, and if you're asking about compliance, some apps will ask you to make sure your data is "CIM compliant", mainly the InfoSec app and Compliance Essentials for Splunk. I have done more searching on this than on literally anything else in Splunk so far, and the one thing I can't find is an example where all the details are laid out and it is obvious what that looks like. I guess I figured most environments looked the same because the data looks the same going in, but it feels like rocket science. I tried to follow things like https://www.deductiv.net/blog/splunk-cim-performance/ but even that has had some fields not show up where I know they should, especially in the InfoSec app. That has me ultimately editing the macro for Authentication, but I have also read "don't edit this", so what gives? Maybe I am going about this the wrong way. So if you can either show me what your environment looks like, or point me to a place that covers Splunk CIM compliance from A to Z for all the relevant fields, for dummies, I would be very interested. Thanks.
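For what it's worth, a bare-bones sketch of what CIM mapping for an Authentication source usually involves (the sourcetype, raw field names, and eventtype search below are made-up placeholders; the authoritative field list is in the CIM Authentication data model documentation):

# props.conf -- alias/calculate the raw fields into the CIM names for your sourcetype
[my:auth:sourcetype]
FIELDALIAS-cim_user = username AS user
FIELDALIAS-cim_src = source_host AS src
EVAL-action = if(result=="OK", "success", "failure")

# eventtypes.conf -- define an eventtype that matches the authentication events
[my_auth_events]
search = sourcetype=my:auth:sourcetype (result=OK OR result=FAIL)

# tags.conf -- tag the eventtype so the Authentication data model picks it up
[eventtype=my_auth_events]
authentication = enabled

With the tags in place, the Authentication data model's constraint searches should match the events without the macro being touched.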
Hi, I have 2 queries, let's call them query_a and query_b.

query_a gives me a table containing all the userAgents that call a specific endpoint of my service. Basically this gives me a list of all the clients that are on outdated versions of my app, as those are the only ones calling this deprecated endpoint. So this query gives me a list of all the outdated versions of my app that are still being used.

query_b gives me a table containing the userAgents for every endpoint of my service. So this query gives me a list of every version of my app that is being used.

Here userAgent is a string like "app_name/app_version (device_name; device_OS_version)", and I am only concerned with the app_version part.

I need to calculate what percentage of the userAgents given by query_a (clients on the outdated app_version) make up the results given by query_b (all clients). I need to do this to figure out how many clients are using an outdated app version. How do I achieve this?

query_a is like this:

index::apps source="/data/log/company/service/SERVICE-PUBLIC-API-access.log" "GET /service/diners/*/orders"
| rex "diners/(?<dinerId>.*)/orders HTTP/1.1\" (?<responseStatus>\d\d\d) .. ... \"(?<userAgent>.*?)\""
| dedup userAgent
| table userAgent

query_b is like this:

index::apps source="/data/log/company/service/SERVICE-PUBLIC-API-access.log"
| rex "HTTP/1.1\" (?<responseStatus>\d\d\d) .. ... \"(?<userAgent>.*?)\""
| dedup userAgent
| table userAgent
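One way to combine them in a single pass, as a sketch (the base search and rex are reused from query_b, and the deprecated-endpoint pattern is adapted from query_a): flag each userAgent as outdated if it was ever seen on the deprecated endpoint, then compute the ratio.

index::apps source="/data/log/company/service/SERVICE-PUBLIC-API-access.log"
| rex "HTTP/1.1\" (?<responseStatus>\d\d\d) .. ... \"(?<userAgent>.*?)\""
| eval outdated=if(match(_raw, "GET /service/diners/[^/]+/orders"), 1, 0)
| stats max(outdated) as outdated by userAgent
| stats count as total_clients, sum(outdated) as outdated_clients
| eval outdated_pct=round(100*outdated_clients/total_clients, 2)

The first stats gives one row per distinct userAgent (replacing the dedup), and the second stats reduces that to the totals needed for the percentage. To group by app_version instead of the full userAgent, an extra rex such as | rex field=userAgent "^[^/]+/(?<app_version>[^ ]+)" before the first stats would work.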
Hi, I have many dashboards, a mix of classic dashboards and Studio dashboards. How can I get a list of dashboards showing which are classic and which are Studio?
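A rough sketch using the REST endpoint for views (the version="2" check is a heuristic for Dashboard Studio definitions and may need adjusting for your Splunk version):

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search isDashboard=1
| eval dashboard_type=if(match('eai:data', "<dashboard[^>]*version=\"2\""), "Studio", "Classic")
| table title eai:acl.app eai:acl.owner dashboard_type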
I'm running this command in PowerShell to try to install a Universal Forwarder on my Windows 2019 server:

msiexec.exe /i "C:\TEMP\splunkforwarder-9.0.0.1-9e907cedecb1-x64-release.msi" WINEVENTLOG_APP_ENABLE=0 WINEVENTLOG_SEC_ENABLE=0 WINEVENTLOG_SYS_ENABLE=0 WINEVENTLOG_FWD_ENABLE=0 WINEVENTLOG_SET_ENABLE=0 AGREETOLICENSE=Yes SERVICESTARTTYPE=auto DEPLOYMENT_SERVER="deployment.splunk.uic.edu:8089" /norestart

I'm running into this error and I'm not sure why:

This installation package could not be opened. Verify that the package exists and that you can access it, or contact the application vendor to verify that this is a valid Windows Installer package.

Is there a better way to install the universal forwarder over the command line?
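A slightly more defensive way to run the same install from PowerShell, as a sketch (the MSI path, flags, and deployment server are copied from the question; the log path is an assumption): check the path first and capture a verbose MSI log, which usually explains "package could not be opened" failures (often a wrong path, a blocked or partially downloaded file, or insufficient permissions).

$msi = "C:\TEMP\splunkforwarder-9.0.0.1-9e907cedecb1-x64-release.msi"

# Fail early if the package is not where we think it is
if (-not (Test-Path $msi)) { throw "MSI not found at $msi" }

$msiArgs = @(
    "/i", "`"$msi`"",
    "WINEVENTLOG_APP_ENABLE=0", "WINEVENTLOG_SEC_ENABLE=0", "WINEVENTLOG_SYS_ENABLE=0",
    "WINEVENTLOG_FWD_ENABLE=0", "WINEVENTLOG_SET_ENABLE=0",
    "AGREETOLICENSE=Yes", "SERVICESTARTTYPE=auto",
    "DEPLOYMENT_SERVER=`"deployment.splunk.uic.edu:8089`"",
    "/norestart", "/L*v", "C:\TEMP\uf_install.log"
)

# Run msiexec, wait for it, and inspect C:\TEMP\uf_install.log if it fails again
Start-Process -FilePath "msiexec.exe" -ArgumentList $msiArgs -Wait -NoNewWindow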
I have run into some cases where the best path forward was to reinstall a Universal Forwarder and point it to a Deployment Server to get a clean set of configurations. The problem is that if the same paths are monitored after the reinstallation, events could be reindexed. I know that I could potentially make a backup of $SPLUNK_HOME/var/lib/splunk/fishbucket/ before uninstallation and place it on the new UF (Solved: How can I prevent reindexing events after a reinst... - Splunk Community), but when I read some of the data in these files, I see references to the GUID of the current instance of the UF. Wouldn't this create a conflict with the new GUID generated for the new instance of the UF? How does Splunk treat this inconsistency?
I created a search that shows the count of outages by certain Apps over spans of the last 90 days, broken up into last 30, 31-60, and 61-90 days. I want to configure the drilldown so I can click on the value for an app in one of these timespans and show a table with the App name and Alert message. The search operates how I want; I'm just not sure about the drilldown capabilities.

Search:

index=... sourcetype=... App=* Status="Down" earliest=-90d latest=now()
| fields Alert App Status
| dedup Alert
| stats count(Alert) as "C" by App
| join type=left
     [search index=... sourcetype=... App=* Status="Down" earliest=-60d latest=now()
      | fields Alert App Status
      | dedup Alert
      | stats count(Alert) as "B" by App]
| join type=left
     [search index=... sourcetype=... App=* Status="Down" earliest=-30d latest=now()
      | fields Alert App Status
      | dedup Alert
      | stats count(Alert) as "A" by App]
| fillnull value=0 A B C
| table App A B C
| eval B=B - A
| eval AB=A + B
| eval CC=C - AB
| eval Sum=A + B + CC
| rename A as "30 Days", B as "31-60 Days", CC as "61-90 Days"
| fields App "30 Days" "31-60 Days" "61-90 Days"

Results example:

App     30 Days    31-60 Days    61-90 Days    Sum
App1    5          7             0             12
App2    2          4             10            16

My drilldown search will be something like:

index=... sourcetype=... App=* Status="Down"
| dedup Alert
| search App="$click.value2$"
| table App Alert

Is there a way to set a token to select the time range based on where I click? Any recommendations on how to get just the values, for example the 7 events for "App1" in "31-60 Days"?
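A sketch of the drilldown block in Simple XML (the token names are made up, and it assumes the panel's table keeps the column names shown above): each <condition field="..."> matches the column that was clicked, $row.App$ carries the app from the clicked row, and the tokens drive the detail panel's time range.

<drilldown>
  <condition field="30 Days">
    <set token="drill_app">$row.App$</set>
    <set token="drill_earliest">-30d@d</set>
    <set token="drill_latest">now</set>
  </condition>
  <condition field="31-60 Days">
    <set token="drill_app">$row.App$</set>
    <set token="drill_earliest">-60d@d</set>
    <set token="drill_latest">-30d@d</set>
  </condition>
  <condition field="61-90 Days">
    <set token="drill_app">$row.App$</set>
    <set token="drill_earliest">-90d@d</set>
    <set token="drill_latest">-60d@d</set>
  </condition>
</drilldown>

The detail panel's search can then filter with App="$drill_app$" and use earliest=$drill_earliest$ latest=$drill_latest$ (or <earliest>/<latest> tags) so it only returns, for example, the 7 alerts behind App1's 31-60 Days cell.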
Hi, my task involves creating a search against a data model, i.e. Network_Traffic. Below is how I converted the base search to a data model search; the part shown in red (the exclusion subsearch) is not working as expected!!

| tstats summariesonly=t values(All_Traffic.src_ip) as src_ip, dc(All_Traffic.dest_port) as num_dest_port, values(All_Traffic.dest_port) as dest_port from datamodel=Network_Traffic by All_Traffic.dest_ip
| where num_dest_port > 100
| search NOT [| inputlookup addresses.csv | search (comments =*scanner*) | fields IP AS ALL_Traffic.src_ip | format ]

Thanks..
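A sketch of how that exclusion is usually written (assuming the lookup's IP column should be matched against the source address): fields does not rename a field, rename does, the field name is case-sensitive (All_Traffic, not ALL_Traffic), and putting the subsearch in the tstats where clause filters the events before aggregation.

| tstats summariesonly=t values(All_Traffic.src_ip) as src_ip, dc(All_Traffic.dest_port) as num_dest_port, values(All_Traffic.dest_port) as dest_port from datamodel=Network_Traffic where NOT [| inputlookup addresses.csv | search comments="*scanner*" | fields IP | rename IP as All_Traffic.src_ip | format ] by All_Traffic.dest_ip
| where num_dest_port > 100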