All Topics



I have a search where two of the fields returned are based on the following JSON structure:

"tags": [
    { "key": "My Key to Search For", "value": "The value I want to see" },
    { "key": "Some other key", "value": "some value" }
]

I can get the data in a table, e.g.:

| table asset, tags{}.key, tags{}.value

In my search this lists all my assets, each with their respective tag keys and values as multivalue lists in their own fields:

asset      tags{}.key          tags{}.value
asset_001  [TAG_001, TAG_002]  [VALUE_001, VALUE_002]
asset_002  [TAG_001]           [VALUE_001]

I now want to create a new field based on these tags, where:

mynewfield = tags{}.value where tags{}.key = "My Key to Search For"

so that:

asset      mynewfield
asset_001  VALUE_002
asset_002  NONE

I tried using eval and mvfilter but I cannot seem to get the statements right, and I'm sure I'm missing something. Can anyone shed some light on how to do this in a Splunk search?
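One possible approach (an untested sketch, assuming the multivalue fields tags{}.key and tags{}.value stay index-aligned) is to zip the keys and values into combined pairs first, since mvfilter can only reference a single field, and then filter on the pair:

```spl
| eval pairs=mvzip('tags{}.key', 'tags{}.value', "=")
| eval wanted=mvfilter(match(pairs, "^My Key to Search For="))
| eval mynewfield=if(isnull(wanted), "NONE", mvindex(split(wanted, "="), 1))
| table asset mynewfield
```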
In our DMZ we have UFs installed on Windows/Linux hosts. They forward events to an intermediate heavy forwarder in the DMZ which doubles as a deployment server and Stream app server. I've pushed out the Splunk_TA_stream to the UFs with the correct intermediate heavy forwarder as the Stream server; however, I'm not seeing any of the UFs. I suspect it's due to restrictions on the firewall between the different DMZ zones. What ports need to be open between the UFs' Splunk_TA_stream and the Splunk Stream server? I also assume that once it's configured there won't be an issue with routing through an intermediate relay heavy forwarder... right? And finally, is there a way to manually configure the Splunk_TA_stream add-on and not use the Splunk Stream app?
Hello all, I am looking to use a customized time picker where the user can select only the latest date/time, and the earliest time will always be -1 min from the selected time. I don't want to change the default time picker; this will be a secondary time picker for a specific dashboard. Not sure if this is possible. Thanks
We have set up the Jenkins Plugin for Splunk, but not all of the Jenkins nodes are showing up in the "Jenkins Nodes" dropdown in the Build Analysis dashboard. We did the standard install, so I'm not sure if there is some setting that needs to be changed in Jenkins to push all the Jenkins nodes into Splunk. Has anyone else experienced this and been able to fix it? Thanks!
Hi, I'm banging my head against the wall over this (probably) simple question. I've got an inputlookup "indexers". As the name says, those are the Splunk indexers, but it will be more than that in the future. I want to get disk sizes from them with the search below:

|inputlookup indexers
| fields host
| stats count by host
| map search="search (| rest splunk_server=$host$ /services/server/status/partitions-space]")

It all goes well until the map command. The stats gives a nice list of the servers. It goes wrong at the "search (| rest splunk_server=$host$ /services/server/status/partitions-space]" part. When I try this part of the search, it strips the | from the search and gives nothing. It seems a search command followed by a | will strip the |, and then the rest search is useless. What can I do to pass the hostnames from the inputlookup to the | rest search?

Thanks in advance,
Jari
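One thing worth trying (an untested sketch): map accepts a subsearch string that begins directly with a generating command, so dropping the leading search ( wrapper and the stray closing bracket may be enough:

```spl
| inputlookup indexers
| fields host
| stats count by host
| map search="| rest splunk_server=$host$ /services/server/status/partitions-space"
```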
Hi, I'm having random scheduled searches being missed (not skipped) and I don't know why. Below is a sample of an every-5-min search. The 7:05 run did not run. I can't find anything in the logs that tells me why. Any suggestions on what to look for and where? This is just from _internal scheduler:

scheduled_time       dispatch_time        endTime              run_time  status
2021-02-24 07:00:00  2021-02-24 07:00:37  2021-02-24 07:00:44  5.624     success
2021-02-24 07:00:00  2021-02-24 07:00:44                                 delegated_remote_completion
2021-02-24 07:00:00  2021-02-24 07:00:37                                 delegated_remote
2021-02-24 07:10:00  2021-02-24 07:11:31  2021-02-24 07:11:40  4.351     success
2021-02-24 07:10:00  2021-02-24 07:11:37                                 delegated_remote_completion
2021-02-24 07:10:00  2021-02-24 07:11:31                                 delegated_remote

Thank you!

Chris
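A search along these lines can help surface what the scheduler logged around the missing run (a sketch: the savedsearch_name value is a placeholder, and the reason field is only populated for some status values):

```spl
index=_internal sourcetype=scheduler savedsearch_name="<your search name>"
| table _time scheduled_time dispatch_time run_time status reason
| sort _time
```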
In my previous post, https://community.splunk.com/t5/Dashboards-Visualizations/how-draw-a-line-connects-from-plot-to-plot-on-scatter-chart/m-p/539982, the accepted solution solved my earlier issue, and now I would like to expand this usage to draw multiple lines in one graph area. What I am trying to do is analyze sports data, let's say tennis, where the original data has serve info: the point where a player hit a serve and the point where the ball landed. With this kind of data I can draw the points using a scatter chart, and then I would like to draw a line from each serve hit point to its land point. I created dummy data like below and tried to use the solution, however it does not work well. What does not work: 1. Duplicated data, such as a serve hit point, does not appear. 2. I cannot draw the lines in one graph area (map?). How can I create a graph like this on a dashboard from the sample data below? Please advise.

| makeresults
| eval _raw="serve,xpoint,ypoint
first-serve,4.5,-0.3
first-serve_land,3.8,15.6
second-serve,4.5,-0.3
second-serve_land,3.6,16.0
third-serve,4.6,-0.2
third-serve_land,3.8,16.3
fourth-serve,4.7,-0.4
fourth-serve_land,3.9,16.5
fifth-serve,4.6,-0.5
fifth-serve_land,4.0,16.9"
| multikv forceheader=1
| table serve,xpoint,ypoint
incoming/d0000c00002/data_reuse/d000/d0000c00002/ar/shared/sdtm/prod/data/idap_20191011/dm.sas7bdat

What I need is to extract only d0000c00002 before data_reuse.
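A rex along these lines (untested sketch; the field name asset_id and the choice of the source field are assumptions) would capture the path segment between incoming/ and /data_reuse:

```spl
| rex field=source "incoming/(?<asset_id>[^/]+)/data_reuse/"
```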
Hi, I would like to filter a dashboard by using a Dropdown Input at the top of my dashboard. By selecting one of the dropdown values (Total, Data A, Data B) the charts and tables should only show values of the chosen dataset. Data A and Data B can be distinguished by field12, which consists of numbers. For Data A, field12 always starts with 1 or 2. For Data B, field12 always starts with a 7. For Total it doesn't matter, because Total should contain all data. Can somebody tell me what I need to add to the search of my tables and charts, and what I need to do with the tokens?
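One way to sketch this (untested; it assumes a dropdown token named dataset_regex whose choice values are . for Total, [12] for Data A, and 7 for Data B) is to filter on the first digit of field12 in each panel's search:

```spl
... your base search ...
| where match(tostring(field12), "^$dataset_regex$")
```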
Here is the requirement: I want to create a form with a dropdown listing the apps on my search head. If a developer chooses an app from the list, the dashboard should show what level of permission (Read/Write) is granted to whom. Does the app metadata write this information anywhere in the logs, or can we get this via a REST API search?

Sample output:

App                 Permission
Dashboard Examples  READ - * ; WRITE - POWER
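Permissions are exposed over REST; a sketch along these lines could drive the panel (untested: the eai:acl.* field names come from the REST output and may vary by version, and $app_token$ is an assumed dropdown token):

```spl
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:acl.app="$app_token$"
| table title eai:acl.perms.read eai:acl.perms.write
```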
Hi, I have installed the Splunk Deep Learning Toolkit, including the necessary dependencies, on a VMware Workstation Windows 10 machine. I am following this tutorial: https://youtu.be/IYCvwABLyh4 and got to the point where I need to use the Neural Network Classifier Example. The container is running, everything seems OK. Then I go to Examples -> Classifier -> Neural Network Classifier Example, choose 10 epochs, and click SUBMIT. In the graph placeholders I get the message:

Error in 'fit' command: Error while initializing algorithm "MLTKContainer": local variable 'url' referenced before assignment

My C:\Program Files\Splunk\etc\apps\mltk-container\local\containers.conf contents are:

[default]
[__dev__]
api_url = http://localhost:49154
cluster = docker
id = 133*****0b610
image = mltk-container-tf-cpu
jupyter_url = http://localhost:8888
runtime = None
spark_url = http://localhost:4040
tensorboard_url = http://localhost:6006

Did I miss anything? The selected Docker image is "deprecated", but it's the closest to what is shown in the tutorial (I don't have a GPU engine on the VM). Help!
Hi all, I would like to ask why our heavy forwarders consistently restart whenever this log shows up:

splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd

Currently it consumes 13 GB of physical memory. What does it do? What is it for? And how can we solve this issue?
Situation

I am trying to parse events with an unrestricted number of key-value pairs that might also include empty values in some places. I would like to extract the part between the closing parenthesis and the opening square bracket as the field name, without spaces (but I don't want them replaced by underscores). This is an example of such data:

2021-02-24 10:02:31 Local0 Info 10:02:31:346 VARC-DCM-01.ad.maastro.nl MAASTRO\VARC-DCM-01$|80012|DICOM Service VARC_DCM_SCP_SVC_Export 021556/Export requested for object with key: (0008,0008) Image Type [DERIVED] | (0008,0016) SOP Class UID [1.2.840.10008.5.1.4.1.1.481.1] | (0008,0022) Acquisition Date [20210223] | (0008,0023) Content Date [20210223] | (0008,0032) Acquisition Time [184740.207] | (0008,0033) Content Time [184740.208] | (0008,1150) Referenced SOP Class UID [1.2.840.10008.5.1.4.1.1.481.5] | (0020,0013) Instance Number [1] | (300C,0002) Referenced RT Plan Sequence [Mergecom.MCitem] | (300C,0006) Referenced Beam Number [1] | (300E,0002) Approval Status []

Working solution using SPL

Using this SPL expression (inspired by the example in this question on multiple field extraction):

| eval backup=_raw
| rex max_match=0 mode=sed "s/(?:(?:\s\|)?\s)\((?<g>[\da-fA-F]{4}),(?<e>[\da-fA-F]{4})\)\s+(?<k>(?:\w+(?:\s*))+)\[(?<v>[^\]]*)\]/\3=\"\4\",/g"
| rex mode=sed "s/\s//g"
| extract pairdelim=":," kvdelim="="
| rename backup AS _raw

I am able to translate this to my desired outcome:

Image Type              DERIVED
SOP Class UID           1.2.840.10008.5.1.4.1.1.481.1
...
Referenced Beam Number  1
Approval Status         (empty)

Example in SPL (for testing)

Here is a working example to help with testing:

| makeresults
| eval _raw="2021-02-24 10:02:31 Local0 Info 10:02:31:346 VARC-DCM-01.ad.maastro.nl MAASTRO\VARC-DCM-01$|80012|DICOM Service VARC_DCM_SCP_SVC_Export 021556/Export requested for object with key: (0008,0008) Image Type [DERIVED] | (0008,0016) SOP Class UID [1.2.840.10008.5.1.4.1.1.481.1] | (0008,0022) Acquisition Date [20210223] | (0008,0023) Content Date [20210223] | (0008,0032) Acquisition Time [184740.207] | (0008,0033) Content Time [184740.208] | (0008,1150) Referenced SOP Class UID [1.2.840.10008.5.1.4.1.1.481.5] | (0020,0013) Instance Number [1] | (300C,0002) Referenced RT Plan Sequence [Mergecom.MCitem] | (300C,0006) Referenced Beam Number [1] | (300E,0002) Approval Status []"
| eval backup=_raw
| rex max_match=0 mode=sed "s/(?:(?:\s\|)?\s)\((?<g>[\da-fA-F]{4}),(?<e>[\da-fA-F]{4})\)\s+(?<k>(?:\w+(?:\s*))+)\[(?<v>[^\]]*)\]/\3=\"\4\",/g"
| rex mode=sed "s/\s//g"
| extract pairdelim=":," kvdelim="="
| rename backup AS _raw

Question

Now I would like to transfer this to configuration files, but I am unsure what to add where. I am guessing the regular expression goes into tokenizer.conf based on this post, but I am not sure how that works when combined with the sed command. Normally I would put the SED commands into transforms.conf, but how do I prevent them from applying to all events? Events like the one processed in the example are only a subset of the events in the index and the sourcetypes in there. The pairdelim and kvdelim are overrides of the defaults from the sourcetype configuration; I am not sure where to put these either. Can someone guide me here? Is there some sort of sequence I can configure, like the one in SPL, to apply to specific events? How would I go about filtering these events?
{"timestamp":"2021-02-24T00:00:46.533+00:00","message":"Snapshot event published: SnapshotEvent(status=CREATED, version=SnapshotVersion(sourceSystem=zvkk, source=sdp/deposits/zvkk/2021-02-24/NextBusinessDays/Snapshot1, entityType=NEXT_BUSINESS_DAYS, date=2021-02-24, version=1, snapshotSize=5, uuid=8683aa33-3a6c-4087-9cdd-3084d8e70147, holiday=false))","component":"com.db.sdda.dc.kafka.service.SnapshotEventNotifyService","thread":"scheduling-1","level":"INFO"}

{"timestamp":"2021-02-23T20:56:37.797+00:00","message":"Snapshot event published: SnapshotEvent(status=CREATED, version=SnapshotVersion(sourceSystem=IDMS-0781, source=sdp/deposits/IDMS-0781/2021-02-23/FacilityLimit/Snapshot1, entityType=FACILITY, date=2021-02-23, version=1, snapshotSize=15168, uuid=016cc1ad-8c27-4144-a9d2-c0233cc1e450, holiday=false))","component":"com.db.sdda.dc.kafka.service.SnapshotEventNotifyService","thread":"scheduling-1","level":"INFO"}

I used the commands below:

|rex field=_raw "sourceSystem=(?<So1>\w+[-]\w+)"  -> able to get IDMS-0781 as output, but unable to get a single-word value like zvkk
|rex field=_raw "sourceSystem=(?<So2>\w+)"

Problem statement:
1. I would like to extract sourceSystem as everything before the comma (sourceSystem=IDMS-0781,).
2. Both values, with and without a hyphen, should be picked up by the rex command.
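A character class that excludes the comma should cover both shapes, since \w+ either stops at the hyphen or requires one; a sketch:

```spl
| rex field=_raw "sourceSystem=(?<So1>[^,]+)"
```

Against the sample events above, this captures zvkk from the first event and IDMS-0781 from the second.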
Hi, I want to create a new field which will simply pull out the first x number of characters from a line in an event log. I am not sure of the regex to use, as I assume that's the option to go for? As per the image, this log brings back an initial datetime stamp followed by certain text (which is what I am searching on). If I can get this into a specific field it will help with amending my query, its output, and the linked alert which uses it.
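If the goal is literally the first x characters, eval's substr avoids regex entirely; a sketch (the length 23 is a placeholder for however many characters the timestamp and leading text occupy):

```spl
| eval prefix=substr(_raw, 1, 23)
```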
Hi, what is the best way to:

1. keep a variable in a single playbook (e.g. a counter that is needed only in one run of a playbook and that I want to increase following a particular logic)?
2. keep a variable across playbooks (e.g. a counter that I need to update across several runs of a playbook)? Currently I am using Custom Lists and store these variables in the rows, but that forces me to manually reset the variables to 0 when needed.
3. create for loops to cycle over some actions/blocks within a playbook?

Thank you in advance
data: {
    DESC: Documentation for subsetted study data for iDAP Request INT-20200527-421
    DE_IDENTIFICATION_DATE: 2020-07-16
    EXCLUDED_COUNTRIES: null
    ID: 4849
    IS_OBSOLETE: false
    LOCATION: root/data_reuse/d848/d8480c00051/ar/shared/adam/doc/idap_20200716
    REMOVED_DUE_TO_COUNTRY_REMOVAL: null
    REPORTING_LOCATION_ID: 18495
    REUSE_LOCATION_CATEGORY_ID: 2
    REUSE_LOCATION_DATA_CATEGORIES: [ ]
}

I want the timestamp field to be data.DE_IDENTIFICATION_DATE. I have the settings below in my props.conf:

INDEXED_EXTRACTIONS = JSON
TIMESTAMP_FIELDS = date
TIME_FORMAT = %Y%m%d
TZ = UTC
detect_trailing_nulls = auto
SHOULD_LINEMERGE = false
description = My source type
pulldown_type = true
disabled = false
KV_MODE = none
AUTO_KV_JSON = false
TIMESTAMP_FIELDS = DE_IDENTIFICATION_DATE

Please suggest the right way of referencing the JSON data value.
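For INDEXED_EXTRACTIONS = JSON, TIMESTAMP_FIELDS takes the dotted path to the nested key, and TIME_FORMAT has to match the value as written (2020-07-16 is %Y-%m-%d, not %Y%m%d). A hedged sketch of the relevant lines, with the stanza name as a placeholder:

```ini
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = data.DE_IDENTIFICATION_DATE
TIME_FORMAT = %Y-%m-%d
TZ = UTC
SHOULD_LINEMERGE = false
KV_MODE = none
AUTO_KV_JSON = false
```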
Is there a way to schedule a playbook run without having any container? Is it possible?
Is there a way to automatically delete some containers within a playbook?
Hi everyone, with Phantom version 4.10.1.45070 and app version 3.0.3 I noticed that the maximum number of emails I can retrieve with the "run query" action is 1000. Is that correct? Why isn't it documented?