All Topics

Hello, is it possible to create a correlation search in the Splunk ES app using the REST API?
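One approach worth testing (a hedged sketch, not an official recipe): ES correlation searches are stored as saved searches with extra action.correlationsearch.* keys in savedsearches.conf, so the standard saved/searches REST endpoint can create one. The host, credentials, search text and parameter names below are illustrative — verify them against the savedsearches.conf of an existing correlation search in your ES version before relying on this.

# creates a scheduled saved search in the ES app namespace and flags it as a correlation search
curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches \
     --data-urlencode 'name=My correlation search (REST)' \
     --data-urlencode 'search=index=main sourcetype=my_sourcetype | stats count by src' \
     --data-urlencode 'is_scheduled=1' \
     --data-urlencode 'cron_schedule=*/15 * * * *' \
     --data-urlencode 'action.correlationsearch.enabled=1' \
     --data-urlencode 'action.correlationsearch.label=My correlation search (REST)' \
     --data-urlencode 'action.notable=1'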
I am looking to monitor an audit folder that contains files which get generated automatically every day. Below is how the audit directory looks (these are the file names): Activity_Engine_2021-12-18T14.51.04Z, Activity_Engine_2021-12-19T02.53.38Z, Activity_Engine_2021-12-19T15.00.28Z, Activity_Engine_2021-12-20T03.00.30Z (plus a Windows sample screenshot). I want to monitor only the latest file and index the logs inside it, but I am not sure how to achieve this. Any help would be appreciated.
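A possible starting point (hedged sketch; the path, index and sourcetype names are placeholders): Splunk has no "latest file only" monitor mode, but ignoreOlderThan on a monitor input skips files whose modification time is older than the given window, which in practice limits ingestion to the newest daily files. Note that once a file has been ignored it stays ignored, so size the window carefully.

# inputs.conf on the forwarder monitoring the audit directory
[monitor://D:\audit\Activity_Engine_*]
index = audit_idx
sourcetype = activity_engine
ignoreOlderThan = 1d
disabled = 0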
How do I get the Splunk ES 7-day sandbox?
Hi, We are running into an issue where the Splunk eStreamer Technical Add-On keeps crashing when receiving events from our Cisco Firepower instance. The exact error logs being observed on the Splunk side are as follows: 2021-12-17 09:19:23,377 root INFO 'latin-1' codec can't encode character '\u2013' in position 460: ordinal not in range(256) 2021-12-17 09:19:23,384 Writer ERROR [no message or attrs]: 'latin-1' codec can't encode character '\u2013' in position 460: ordinal not in range(256)\n'latin-1' codec can't encode character '\u2013' in position 460: ordinal not in range(256)Traceback (most recent call last):\n File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/baseproc.py", line 209, in receiveInput\n self.onReceive( item )\n File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/baseproc.py", line 314, in onReceive\n self.onEvent( item )\n File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/pipeline.py", line 416, in onEvent\n write( item, self.settings )\n File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/pipeline.py", line 238, in write\n streams[ index ].write( event['payloads'][index] + delimiter )\n File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/streams/file.py", line 96, in write\n self.file.write( data.encode( self.encoding ).decode('utf-8') )\nUnicodeEncodeError: 'latin-1' codec can't encode character '\u2013' in position 460: ordinal not in range(256)\n 2021-12-17 09:19:23,384 Writer ERROR Message data too large. Enable debug if asked to do so. 2021-12-17 09:19:23,385 Writer INFO Error state. Clearing queue We have also updated the TA to the latest version (4.8.3) as noted on the Splunk Add-On page for the app: https://splunkbase.splunk.com/app/3662/ On the HF side, we also increased number of worker processes from 4 to 8 which did not help.  Wondering if anyone experienced the same issues.  Let me know.  Thanks.
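For what it's worth, the stack trace points at the TA's file.py encoding the event payload as latin-1, which cannot represent U+2013 (an en dash in the Firepower event text). The Python snippet below only demonstrates the codec behaviour and the usual workarounds — it is not an official TA patch, and whether the TA exposes a UTF-8 or error-handling setting is something to verify in its documentation.

# -*- coding: utf-8 -*-
# Why '\u2013' (en dash) breaks a latin-1 encode, and two ways around it.
text = u"maintenance window 09:00\u201317:00"

try:
    text.encode("latin-1")
except UnicodeEncodeError as exc:
    print("fails just like the TA:", exc)

print(text.encode("latin-1", errors="replace").decode("latin-1"))  # en dash degraded to '?'
print(text.encode("utf-8").decode("utf-8"))                        # utf-8 round-trips it intact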
When I restart the search head, the Incident Review page is very, very slow.
Hello. We are planning to run Splunk 8.1.x on Windows Server and want to take periodic backups of the following data: configuration (/SPLUNK_HOME/etc), index data, and the KV store.

Question 1: We are considering incremental backups with Robocopy using the following procedure — is this feasible?
i. Take a backup of the KV store using the "splunk backup kvstore~" command.
ii. Use Robocopy to back up the following:
- index data (excluding hot buckets) and the KV store → directory /SPLUNK_HOME/var/lib/splunk
- configuration files → directory /SPLUNK_HOME/etc

Question 2: We are also verifying the restore procedure. After reinstalling Splunk, can we restore by overwriting with the data backed up in Question 1?

Thank you in advance.
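A rough sketch of the Robocopy step (the paths are examples only; adjust SPLUNK_HOME and the backup target to your installation): /MIR mirrors the source tree including deletions, /Z uses restartable mode, and /R:3 limits retries. Hot buckets change while splunkd is running, so either stop Splunk before copying index data or exclude the hot bucket directories and accept that the newest data is only protected once it rolls to warm.

REM Example paths only - adjust to your environment.
robocopy "C:\Program Files\Splunk\etc" "E:\splunk_backup\etc" /MIR /Z /R:3
robocopy "C:\Program Files\Splunk\var\lib\splunk" "E:\splunk_backup\var_lib_splunk" /MIR /Z /R:3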
Hi, I'm new to creating custom alert actions and I'm following the documentation provided by Splunk. I've got my alert to work, but I couldn't find a mechanism to pass the following two items to my application: the number of items in the search result, and the actual search query. I need both of them for my use case and I'm not sure how to do that. I tried following another solved answer along similar lines, but it hasn't helped so far. Here's what I did in savedsearches.conf:

.....
action.tmc.param.result_count = $job.resultCount$
action.tmc.param.search_query = $job.search$
.....

I've also defined the savedsearches.conf.spec file as follows:

.....
action.tmc.param.result_count = <integer>
action.tmc.param.search_query = <string>
.....

However, when I print out the configuration sent to my Python script, I don't see these two arguments passed. I've restarted Splunk, but that hasn't helped either. I would really appreciate it if someone could guide me in the right direction. Thanks!
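In case it helps while debugging, here is a minimal sketch of how a custom alert action script can dump everything Splunk actually sends it. When the script is invoked with --execute, Splunk writes a JSON payload to stdin; the action.tmc.param.* values should appear under "configuration", and fields such as sid are available for fetching the result count via the jobs REST endpoint if the $job.*$ tokens turn out not to expand for your action. Only the tmc action name is taken from the post; everything else is illustrative.

# illustration only: dump the payload a custom alert action receives on stdin
import json
import sys

if __name__ == "__main__" and "--execute" in sys.argv:
    payload = json.load(sys.stdin)
    config = payload.get("configuration", {})
    # Shows whether result_count / search_query arrived and whether the tokens expanded.
    sys.stderr.write("configuration: %s\n" % json.dumps(config, indent=2))
    sys.stderr.write("sid: %s\n" % payload.get("sid", ""))
    sys.stderr.write("results_file: %s\n" % payload.get("results_file", ""))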
Aloha, in doing a little research we found a similar thread on Splunk Answers with a possible solution, but there are some things we need clarified. Here's the URL to the Splunk Answers thread for reference: https://community.splunk.com/t5/Dashboards-Visualizations/How-do-I-use-a-value-from-a-different-field-in-drilldown/m-p/388556 Basically, we have a search result with 5 columns and 10 rows containing numbers in each cell, and the requirement is to click one of the numbers and open a new tab to another search or lookup file. According to the thread, there are tokens such as $row.column_one$, $row.column_two$, $row.column_three$... that can be used. Here's a snippet of the thread (screenshot): Is this true/correct? How do we set or call these tokens so they point to a specific row/column value rather than $click.value$? Is this based on or using a <condition>? Thanks in advance for your help.
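If it helps, $row.<fieldname>$ tokens are populated from the clicked row of a table, so a drilldown can reference a value from a different column than the one that was clicked, and $click.value2$ carries the value of the clicked cell itself. A hedged Simple XML sketch — the lookup name, field names, and the condition field are placeholders:

<drilldown>
  <!-- fires only when a cell in column_two is clicked; $row.column_one$ is that row's value in column_one -->
  <condition field="column_two">
    <link target="_blank">
      <![CDATA[search?q=| inputlookup my_lookup.csv | search id="$row.column_one$" value="$click.value2$"]]>
    </link>
  </condition>
</drilldown>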
I am probably asking the most basic question ever, but I'm new to Splunk and just trying to figure out my host URL. Examples I'm seeing on the internet for my particular use case look something like http://192.168.1.103:8000, but the only thing I've seen in my environment is localhost:8000, which doesn't work for what I need. For reference, I'm trying to pull a dashboard into a web app.
I've got a log file that I'm monitoring with a universal forwarder, using props.conf on the UF with the following settings:

UF - props.conf:
[my_sourcetype]
INDEXED_EXTRACTIONS = JSON

Search head cluster (via the deployer in an app bundle) props.conf:
[my_sourcetype]
KV_MODE = NONE
AUTO_KV_JSON = FALSE

If I run btool on one of the search heads for that sourcetype I get:

ADD_EXTRA_TIME_FIELDS = True
ANNOTATE_PUNCT = True
AUTO_KV_JSON = FALSE
BREAK_ONLY_BEFORE =
BREAK_ONLY_BEFORE_DATE = True
CHARSET = UTF-8
DATETIME_CONFIG = /etc/datetime.xml
DEPTH_LIMIT = 1000
DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
HEADER_MODE =
KV_MODE = NONE
LB_CHUNK_BREAKER_TRUNCATE = 2000000
LEARN_MODEL = true
LEARN_SOURCETYPE = true
LINE_BREAKER_LOOKBEHIND = 100
MATCH_LIMIT = 100000
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_EVENTS = 256
MAX_TIMESTAMP_LOOKAHEAD = 128
MUST_BREAK_AFTER =
MUST_NOT_BREAK_AFTER =
MUST_NOT_BREAK_BEFORE =
SEGMENTATION = indexing
SEGMENTATION-all = full
SEGMENTATION-inner = inner
SEGMENTATION-outer = outer
SEGMENTATION-raw = none
SEGMENTATION-standard = standard
SHOULD_LINEMERGE = True
TRANSFORMS =
TRUNCATE = 10000
detect_trailing_nulls = false
maxDist = 100
priority =
sourcetype =
termFrequencyWeightedDist = false

I'm not sure what I'm missing, but I can't get the duplicate field values to cease.
Hello, I'm looking for some assistance in reconstructing my query, which currently uses | transaction with a traceId value to tie together a couple of different sourcetypes/sources. The query runs really slowly (some of the sourcetypes have results in the 200-million range), so I'm looking to speed it up by using | stats by traceId instead.

The first source example snippet shows the traceId and the 404 response code I am looking for:

time=2021-12-11T23:59:51-07:00 time_ms=2021-12-11T23:59:51-07:00.620+ requestId=-1796576042 traceId=-1796576042 servicePath="/nationalnavigation/" remoteAddr=x.x.x.x clientIp=x.x.x.x clientAppVersion=NOT_AVAILABLE clientDeviceType=NOT_AVAILABLE app_version=- apiKey=somekey oauth_leg=2-legged authMethod=oauth apiAuth=true apiAuthPath=/ oauth_version=1.0 target_bg=default requestHost=services.timewarnercable.com requestPort=8080 requestMethod=GET requestURL="/nationalnavigation/V1/symphoni/event/tmsid/blah.com::TVNF0321206000538347?division=FTWR&lineup=15&profile=sg_v1&cacheID=959&longAdvisory=false&vodId=fort_worth&tuneToChannel=false&watchLive=true&watchOnDemand=true&rtReviewsLimit=0&includeAdult=f" requestSize=835 responseStatus=404 responseSize=420 responseTime=0.405 userAgent="Java/1.xxx" mapTEnabled="F" cClientIp="V-1|IP-x.x.x.x|SourcePort-12345|TrafficOriginID-x.x.x.x" sourcePort="12345" appleEgressEnabled="F" oauth_consumer_key="somekey" x_pi_auth_failure="-" pi_log="pi_ngxgw_access"

The second source example shows the REST server logs with an exception:

2021-12-11 23:59:51,261 ERROR [qtp1647496677-7239] [-1796576042] [c.t.a.n.r.s.r.s.SymphoniRestServiceBroker.handleNnsServiceErrorHeaders:1363] An internal service error occurred: com.twc.atgw.nationalnavigation.SymphoniWebException: Event Not Found

Here's the current query I am looking to improve:

index=vap sourcetype=nns_all OR sourcetype=pi_ngxgw_access "nationalnavigation.SymphoniWebException: Event Not Found" OR "responseStatus=404"
| rex "\] \[(?<traceId>.+)\] \[c.t.a.n.r.s.r.s"
| transaction keepevicted=true by traceId
| search "nationalnavigation.SymphoniWebException: Event Not Found" AND "responseStatus=404"
| mvexpand requestURL
| search requestURL="/nationalnavigation/V1/symphoni/series/tmsproviderprogramid*" OR "/nationalnavigation/V1/symphoni/event/tmsid*"
| eval requestURLLength=len(requestURL)
| rex field=requestURL "/nationalnavigation/V1/symphoni/event/tmsid/.*\%3A\%3A(?<queryString>.+)"
| eval endpoint=case(match(requestURL,"/nationalnavigation/V1/symphoni/series/tmsproviderprogramid*"), "/nationalnavigation/V1/symphoni/series/tmsproviderprogramid", match(requestURL,"/nationalnavigation/V1/symphoni/event/tmsid*"), "/nationalnavigation/V1/symphoni/event/tmsid",1=1,requestURL)
| rex field=queryString "(?<tmsIds>[^?]*)"
| rex field=queryString "(?<tmsProviderProgramIds>[^?]*)"
| eval assetIds=coalesce(tmsIds,tmsProviderProgramIds)
| eval assetCount=mvcount(split(assetIds,","))
| stats count AS TxnCount by endpoint
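A possible stats-based rewrite to experiment with — a sketch, not a drop-in replacement: it assumes traceId is auto-extracted from the traceId= pair in the access log, reuses your rex for the REST log, and replaces the transaction with a stats by traceId followed by a filter on per-trace flags.

index=vap (sourcetype=nns_all "nationalnavigation.SymphoniWebException: Event Not Found") OR (sourcetype=pi_ngxgw_access responseStatus=404)
| rex "\] \[(?<restTraceId>[^\]]+)\] \[c.t.a.n.r.s.r.s"
| eval traceId=coalesce(traceId, restTraceId)
| eval hasException=if(sourcetype="nns_all", 1, 0), has404=if(sourcetype="pi_ngxgw_access", 1, 0)
| stats max(hasException) AS hasException max(has404) AS has404 values(requestURL) AS requestURL by traceId
| where hasException=1 AND has404=1
| mvexpand requestURL
| search requestURL="/nationalnavigation/V1/symphoni/series/tmsproviderprogramid*" OR requestURL="/nationalnavigation/V1/symphoni/event/tmsid*"
| eval endpoint=case(match(requestURL,"tmsproviderprogramid"), "/nationalnavigation/V1/symphoni/series/tmsproviderprogramid", match(requestURL,"/event/tmsid"), "/nationalnavigation/V1/symphoni/event/tmsid", 1=1, requestURL)
| stats count AS TxnCount by endpoint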
Background: I'm working on a form that associates Qualys vulnerability IDs with CVE IDs. I'm leveraging two lookup tables, one Qualys ID centric, the other CVE ID centric. It's a 3-panel form with only one panel initially visible. If a CVE is clicked in the initial panel, the additional two panels become visible; because there is often a series of CVE IDs associated with a single QID, one panel returns results for the clicked CVE and the other panel returns results for all CVEs in that QID.

Initial pane:

| inputlookup qid-cve.csv | fillnull | search TITLE="$title$" QID=$qid$ CVE=*$cve$* VENDOR_REFERENCE=$vr$ CATEGORY=$category$ | makemv delim=", " CVE

Drilldown pane 1:

| inputlookup cve-details.csv | rename Name as CVE | table CVE Description References Votes Comments | search CVE=$form.cve$

And the pane of problems:

| inputlookup cve-details.csv
| rename Name as CVE
| table CVE Description References Votes Comments filter
| eval filter="$cve_list$"
| eval filter=replace(filter, "(CVE\-\d{4}\-\d+)\,?", " OR CVE=\"\1\"")
| eval filter=replace(filter, "^ OR ", "")
| where CVE=filter

What I'm looking for: take a comma-separated field containing all of the CVEs in a QID, join the values together with " OR CVE=\"$_\"", and have that directly interpreted as SPL passed to a where command. Note that CVEs contain hyphens, so in a where clause an unquoted value gets interpreted as an eval subtraction, which is why quoting the CVEs is definitely part of the solution. Here's an example of what I need the SPL to look like:

| inputlookup "cve-details.csv" | rename Name as CVE | table CVE, Description, References, Votes, Comments | where CVE="CVE-2020-13543" OR CVE="CVE-2021-13543" OR CVE="CVE-2020-13584" OR CVE="CVE-2021-13584" OR CVE="CVE-2020-9948" OR CVE="CVE-2021-9948" OR CVE="CVE-2020-9951" OR CVE="CVE-2021-9951" OR CVE="CVE-2020-9983" OR CVE="CVE-2021-9983"

where the variable being expanded holds the string:

"CVE-2020-13543" OR CVE="CVE-2021-13543" OR CVE="CVE-2020-13584" OR CVE="CVE-2021-13584" OR CVE="CVE-2020-9948" OR CVE="CVE-2021-9948" OR CVE="CVE-2020-9951" OR CVE="CVE-2021-9951" OR CVE="CVE-2020-9983" OR CVE="CVE-2021-9983"

Problem: that final panel, the one that returns data for all CVEs in a QID, is proving quite difficult.
My query looks like this:     | inputlookup cve-details.csv | rename Name as CVE | table CVE Description References Votes Comments | eval filter=replace("$cve_list$", "(CVE\-\d{4}\-\d+)\,?", " OR CVE=\"\1\"") | eval filter=replace(filter, "^ OR CVE=", "") | where CVE=filter     The query looks like this once it's optimized:     | inputlookup "cve-details.csv" | rename Name as CVE | table CVE, Description, References, Votes, Comments, filter | eval filter=" OR CVE=\"CVE-2021-3587\" OR CVE=\"CVE-2021-3573\" OR CVE=\"CVE-2021-3564\" OR CVE=\"CVE-2021-3506\" OR CVE=\"CVE-2021-3483\" OR CVE=\"CVE-2021-33034\" OR CVE=\"CVE-2021-32399\" OR CVE=\"CVE-2021-31916\" OR CVE=\"CVE-2021-31829\" OR CVE=\"CVE-2021-29650\" OR CVE=\"CVE-2021-29647\" OR CVE=\"CVE-2021-29264\" OR CVE=\"CVE-2021-29155\" OR CVE=\"CVE-2021-29154\" OR CVE=\"CVE-2021-28971\" OR CVE=\"CVE-2021-28964\" OR CVE=\"CVE-2021-28688\" OR CVE=\"CVE-2021-26930\" OR CVE=\"CVE-2021-23134\" OR CVE=\"CVE-2021-23133\" OR CVE=\"CVE-2021-0129\" OR CVE=\"CVE-2020-29374\" OR CVE=\"CVE-2020-26558\" OR CVE=\"CVE-2020-26147\" OR CVE=\"CVE-2020-26139\" OR CVE=\"CVE-2020-25672\" OR CVE=\"CVE-2020-25671\" OR CVE=\"CVE-2020-25670\" OR CVE=\"CVE-2020-24588\" OR CVE=\"CVE-2020-24587\" OR CVE=\"CVE-2020-24586\"", filter=replace(filter,"^ OR CVE=","") | where (CVE == filter)     How can I convince where to stop looking at filter as a string literal? I've even added it to my table results before so it would have a better chance of looking at it as a field.  That did not work, naturally.
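One way around this, since where compares a field against another field or a literal and will not parse SPL stored inside a field value: let the search command do the matching instead. Dashboard tokens are substituted into the SPL text before parsing, and the search command treats hyphenated values as plain terms, so no quoting gymnastics are needed. A hedged sketch, assuming $cve_list$ expands to a plain comma-separated list of CVE IDs (drop the replace() steps that pre-build the OR string):

| inputlookup cve-details.csv
| rename Name AS CVE
| table CVE Description References Votes Comments
| search CVE IN ($cve_list$)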
Hi there, I've set up a dashboard with various columns, and one of them outputs a number field which has a comma (,) in it. I can remove the comma using the following command: rex field=SurveyAnswers mode=sed "s/\,//g" where SurveyAnswers is the field name. This works fine in a separate search; however, the same command doesn't work when I try to add it to my dashboard and save. Any ideas? Thanks
Hello, I have a .NET 5 Web API deployed in a Linux Docker container, and I want to instrument the application with AppDynamics. The application works and I test it every time with Swagger. I've tried two implementations:

Install the .NET Core Microservices Agent for Windows -> https://docs.appdynamics.com/4.5.x/en/application-monitoring/install-app-server-agents/net-agent/net-microservices-agent/install-the-net-core-microservices-agent-for-windows
.NET Core for Linux SDK -> https://docs.appdynamics.com/21.5/en/application-monitoring/install-app-server-agents/net-agent/net-core-for-linux-sdk

The first scenario didn't work for me: I got a consistent error, a single log line saying "use clr profiler". The second does nothing at all; I have no feedback whatsoever and no log is created.

This is my Docker configuration:

"Docker": {
  "commandName": "Docker",
  "launchBrowser": true,
  "launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}/Swagger",
  "environmentVariables": {
    "CORECLR_PROFILER": "{57e1aa68-2229-41aa-9931-a6e93bbc64d8}",
    "CORECLR_ENABLE_PROFILING": "1",
    "CORECLR_PROFILER_PATH": "/app/bin/Debug/net5.0/runtimes/linux-64/native/libappdprofiler.so",
    "APPDYNAMICS_LOG_PATH": "/app/bin/Debug/net5.0"
  },
  "publishAllPorts": true,
  "useSSL": true
}

And this is the AppDynamics configuration:

{
  "feature": [ "FULL_AGENT" ],
  "controller": {
    "host": "MYHOST.saas.appdynamics.com",
    "port": 443,
    "account": "myAccount",
    "password": "mypassword",
    "ssl": true,
    "enable_tls12": true
  },
  "application": {
    "name": "myapplicationname",
    "tier": "my-appliation-tier",
    "node": ""
  },
  "log": {
    "directory": "/app/bin/Debug/net5.0",
    "level": "ALL"
  }
}
Would love some guidance about what to look for in the logs
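One thing that may be worth ruling out (a hedged sketch, not a confirmed diagnosis): the environmentVariables block in launchSettings.json only applies when the container is started through that Visual Studio launch profile, so if the image is run any other way the CoreCLR profiler variables are never set and the agent produces no logs at all. Baking them into the image makes the behaviour independent of how the container starts. Also note the profiler is usually published under runtimes/linux-x64/native, while the posted path says linux-64 — worth double-checking. The GUID and variable names below are taken from the post; the paths are examples.

# Dockerfile excerpt - adjust paths to where libappdprofiler.so actually lives in the image
ENV CORECLR_ENABLE_PROFILING=1 \
    CORECLR_PROFILER={57e1aa68-2229-41aa-9931-a6e93bbc64d8} \
    CORECLR_PROFILER_PATH=/app/runtimes/linux-x64/native/libappdprofiler.so \
    APPDYNAMICS_LOG_PATH=/tmp/appd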
I configured the okta identity cloud for splunk App to ingest okta logs into splunk but getting the error message below: 2021-12-21 16:34:35,586 ERROR pid=1375 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events. Traceback (most recent call last): File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/modular_input/checkpointer.py", line 218, in get record = self._collection_data.query_by_id(key) File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/client.py", line 3648, in query_by_id return json.loads(self._get(UrlEncoded(str(id))).body.read().decode('utf-8')) File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/client.py", line 3618, in _get return self.service.get(self.path + url, owner=self.owner, app=self.app, sharing=self.sharing, **kwargs) File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 289, in wrapper return request_fun(self, *args, **kwargs) File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 71, in new_f val = f(*args, **kwargs) File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 679, in get response = self.http.get(path, all_headers, **query) File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 1183, in get return self.request(url, { 'method': "GET", 'headers': headers }) File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 1244, in request raise HTTPError(response) solnlib.packages.splunklib.binding.HTTPError: HTTP 503 Service Unavailable -- KV Store is initializing. Please try again later. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events self.collect_events(ew) File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/okta_identity_cloud.py", line 68, in collect_events input_module.collect_events(self, ew) File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/input_module_okta_identity_cloud.py", line 774, in collect_events lastTs = helper.get_check_point((cp_prefix + ":" + opt_metric + ":lastRun")) File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 521, in get_check_point return self.ckpt.get(key) File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/utils.py", line 159, in wrapper return func(*args, **kwargs) File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/modular_input/checkpointer.py", line 222, in get 'Get checkpoint failed: %s.', traceback.format_exc(e)) File "/opt/splunk/lib/python3.7/traceback.py", line 167, in format_exc return "".join(format_exception(*sys.exc_info(), limit=limit, chain=chain)) File "/opt/splunk/lib/python3.7/traceback.py", line 121, in format_exception type(value), value, tb, limit=limit).format(chain=chain)) File "/opt/splunk/lib/python3.7/traceback.py", line 508, in __init__ capture_locals=capture_locals) File "/opt/splunk/lib/python3.7/traceback.py", line 337, in extract if limit >= 0: TypeError: '>=' not supported between instances of 'HTTPError' and 'int'
Hello, my Splunk search queries an API and gets a JSON answer. Here is a sample for one host (the full JSON answer is very long, roughly 400 hosts):

{
  "hosts": [
    {
      "hostInfo": {
        "displayName": "host1.fr"
      },
      "modules": [
        {
          "moduleType": "JAVA",
          "instances": [
            { "Instance Name": "Test1", "moduleVersion": "1.0" },
            { "Instance Name": "Test2", "moduleVersion": "1.1" },
            { "Instance Name": "Test3", "moduleVersion": "1.2" }
          ]
        }
      ]
    }
  ]
}

First of all, I have to parse this JSON manually because Splunk automatically extracts the first fields of the first host only. With the following search, I manually parse the JSON all the way down to the "instances{}" array and count the number of moduleVersion values:

index="supervision_software" source="API" earliest=-1m | spath path=hosts{}.modules{}.instances{} output=host | fields - _raw | mvexpand host | spath input=host | stats count(moduleVersion)

It reports 1277 moduleVersion values, which is the right number. On the other hand, with the next, similar search, where I only parse down to the first array ("hosts{}"), I get a different number of moduleVersion values:

index="supervision_software" source="API" earliest=-1m | spath path=hosts{} output=host | fields - _raw | mvexpand host | spath input=host | stats count(modules{}.instances{}.moduleVersion)

It reports 488 moduleVersion values, which is incorrect. Why is there a difference? Thank you. Best regards,
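A possible explanation to check: when spath runs on input=host without an explicit path, the amount of text it will auto-extract from each value is capped (the extraction_cutoff setting under [spath] in limits.conf, 5000 characters by default), so hosts with long modules arrays lose part of their instances, while the first search only ever hands spath small per-instance chunks. A sketch that keeps the per-instance expansion but also carries the host name along (field names follow the sample JSON):

index="supervision_software" source="API" earliest=-1m
| spath path=hosts{} output=host
| fields - _raw
| mvexpand host
| spath input=host path=hostInfo.displayName output=displayName
| spath input=host path=modules{}.instances{} output=instance
| mvexpand instance
| spath input=instance path=moduleVersion output=moduleVersion
| stats count(moduleVersion) AS moduleVersions by displayName
| addcoltotals labelfield=displayName label="TOTAL"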
I'm trying to figure out how to show the uptime of a device as a percentage over 30 days, in a way that is agnostic to both Linux and Windows data. I am currently using index=os sourcetype=Unix:Uptime as my data set, which is a default data set that ships with the Linux TA. For Windows I am using this search:

index=wineventlog LogName=System EventCode=6013 | rex field=Message "uptime is (?<uptime>\d+) seconds" | eval Uptime_Minutes=uptime/60 | eval LastBoot=_time-uptime | convert ctime(LastBoot) | eval uptime=tostring(uptime, "duration") | stats latest(_time) as time by host, Message, uptime, LastBoot

Currently, I can't figure out how to account for a reboot that occurs during the month. The Linux data doesn't have a 'LastBoot' field like the Windows data, and I'm not sure how to create one. The closest I've gotten is to use something like this for either Linux or Windows, and simply rename/create the 'uptime' field in seconds:

index=nix sourcetype=Unix:Uptime | rename SystemUpTime as uptime | streamstats sum(uptime) as total by host | eval tot_up=(total/157697280)*100 | eval host_uptime=floor(tot_up) | stats max(host_uptime) as pctUp by host

This is obviously crude, and I'm trying to refine it, so I'm looking for any help. I'm sure I'm not the first person to ask a question like this, though I couldn't find anything specific on Answers. I have a search that shows me total uptime as a duration for either Windows or Linux, and that's great! I'm just looking for total uptime as a percentage over a 30-day span that accounts for reboots or legitimate hard-down incidents.
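One way to make this OS-agnostic while accounting for reboots — a rough sketch, with the assumptions flagged: SystemUpTime from the Unix:Uptime sourcetype is taken to be seconds, the Windows uptime comes from your EventCode 6013 rex, and the derived lastBoot is bucketed to 10 minutes to absorb sampling jitter. The idea is to derive a lastBoot per event, treat each distinct lastBoot as one boot session, keep the longest uptime seen in that session, and sum the sessions against a 30-day window.

(index=os sourcetype=Unix:Uptime) OR (index=wineventlog LogName=System EventCode=6013)
| rex field=Message "uptime is (?<win_uptime_secs>\d+) seconds"
| eval uptime_secs=coalesce(win_uptime_secs, SystemUpTime)
| eval lastBoot=_time - uptime_secs
| bin lastBoot span=600
| stats max(uptime_secs) AS session_uptime_secs by host lastBoot
| stats sum(session_uptime_secs) AS total_up_secs dc(lastBoot) AS boot_count by host
| eval window_secs=30*86400
| eval pctUp=round(min(total_up_secs, window_secs) / window_secs * 100, 2)
| table host boot_count pctUp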
We've gotten a search to work that shows the delta between the number of messages in an inbox for a period of time:

<basesearch> | bin _time span=5m | stats max(Items) AS Max by _time | delta Max as Delta | fields _time Delta | where Delta>=10

But I want to do this based on multiple inboxes, and delta is merging the inboxes together, so the values of each inbox are interfering with each other.

<basesearch multiple mailboxes> | bin _time span=5m | stats max(Items) AS Max by _time User | delta Max as Delta | fields _time Delta User

returns:

_time   User    Max   Delta
09:15   user1   103
09:15   user2   251   148
09:15   user3   17    -234

and I want the users to be treated as individual accounts, not merged with each other. I assume I need to use streamstats for this, but so far I've been unable to work out how.
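streamstats with a by clause should do it, as long as the rows are sorted so each user's results are contiguous — here is a sketch along those lines, using the same field names as above:

<basesearch multiple mailboxes>
| bin _time span=5m
| stats max(Items) AS Max by _time User
| sort 0 User _time
| streamstats current=f window=1 last(Max) AS PrevMax by User
| eval Delta=Max - PrevMax
| fields _time User Max Delta
| where Delta >= 10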
Learning about joins and subsearches. What is the following query doing, and is there a way to make it more efficient?

index=old_index | stats count values(d) as d by username | join type=inner username [search index=new_index | stats count by username ]

I believe it starts by searching and counting usernames in the new index; however, I get mixed up after that.
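As written, the subsearch inside join runs first (index=new_index, counting events per username), and the resulting usernames are then inner-joined onto the per-username values(d) results from old_index, so only usernames present in both indexes survive. A common way to avoid the join (and its subsearch result limits) is to search both indexes at once and let stats split the counts — a sketch, assuming username means the same thing in both indexes:

(index=old_index) OR (index=new_index)
| stats values(d) AS d count(eval(index="old_index")) AS old_count count(eval(index="new_index")) AS new_count by username
| where old_count > 0 AND new_count > 0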