All Posts


Is it possible to perform a "left join" lookup from a CSV to an index? Usually a lookup starts with the index, then the CSV file, and the result only contains the correlated rows (an inner join):

index=owner
| lookup host.csv ip_address AS ip OUTPUTNEW host, owner

but I am looking for a left join that still retains all of the data in host.csv. Thank you for your help. Please see the example below:

host.csv
ip_address   host
10.1.1.1     host1
10.1.1.2     host2
10.1.1.3     host3
10.1.1.4     host4

index=owner
ip           host    owner
10.1.1.3     host3   owner3
10.1.1.4     host4   owner4
10.1.1.5     host5   owner5

Left join "lookup" (expected output):
ip_address   host    owner
10.1.1.1     host1
10.1.1.2     host2
10.1.1.3     host3   owner3
10.1.1.4     host4   owner4

Normal "inner join" lookup (index first, then CSV):
ip_address   host    owner
10.1.1.3     host3   owner3
10.1.1.4     host4   owner4
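One way to get the left-join behavior described above is to start from the CSV with inputlookup and bring the owner in from the index. This is only a sketch based on the field names in the example; note that join subsearches are subject to Splunk's subsearch result limits, so this suits smaller data sets:

```
| inputlookup host.csv
| join type=left ip_address
    [ search index=owner
    | rename ip AS ip_address
    | fields ip_address owner ]
```

Rows from host.csv with no match in the index keep an empty owner field, matching the expected "left join" output.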
Can you give examples for both options? 1) I am not sure what you meant by refactoring and moving the long portion into an inputlookup command and a search macro. 2) I am not sure how to make a "bookmarklet". Thanks
Hi @Poojitha try using "unset" in the country dropdown, so that whenever it is changed, the state dropdown will first reset before populating. E.g. in the country dropdown XML:

<change>
  <unset token="state"></unset>
</change>
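For context, a fuller Simple XML sketch of the two cascading dropdowns might look like the following. The index, field, and token names here are placeholders, not from the original dashboard:

```xml
<input type="dropdown" token="country" searchWhenChanged="true">
  <label>Country</label>
  <fieldForLabel>country</fieldForLabel>
  <fieldForValue>country</fieldForValue>
  <search>
    <query>index=locations | stats count by country | fields country</query>
  </search>
  <change>
    <!-- reset the dependent token whenever the country changes -->
    <unset token="state"></unset>
  </change>
</input>
<input type="dropdown" token="state">
  <label>State</label>
  <fieldForLabel>state</fieldForLabel>
  <fieldForValue>state</fieldForValue>
  <search>
    <query>index=locations country="$country$" | stats count by state | fields state</query>
  </search>
</input>
```

The state dropdown's populating search references $country$, so it re-runs after the country token changes, and the unset clears any stale state selection first.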
Hi @varshini_3141 there is currently no way to do this out of the box. You'd need to use custom HTML/CSS, or place text boxes on the map in dashboard studio to show the static values.
Hi @JuanPerez are you able to provide some more detail here? Are you wanting to get the Splunk health status of indexers and the indexes they contain, or is it some other health status you want to monitor, e.g. based on metrics logs?
Hi @AliDodd the first thing to check is whether the scheduled searches are being skipped or are failing. You can check this from the Job Manager or the Splunk health dashboard. If so, check the errors in the search.log and scheduler.log files. If you still can't find the issue, test that the collect command is sending data correctly to the index using a quick makeresults command (assuming there is no problem sending a dummy event to your production index!), e.g.:

| makeresults
| eval test="test"
| collect index="ldap_ad"
Hi @chrislkt
| tstats only searches the indexed fields in tsidx files, so every field referenced in a tstats search must exist in the tsidx files. Most likely the field on one side of your OR condition cannot be found within the time range specified, so tstats can't evaluate the OR condition properly because of the non-existent field, leading to 0 results. To get around this, you can append another tstats command to replace the other side of the OR condition, e.g.:

| tstats count where index="my_index" eventOrigin="api" accountId="8674756857"
| append
    [| tstats count where index="my_index" eventOrigin="api" serviceType="unmanaged"]
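If a single combined count is wanted rather than one row per tstats call, the appended results can be summed. This is a sketch building on the append workaround; note that an event matching both conditions would be counted twice, which the original OR would not do:

```
| tstats count where index="my_index" eventOrigin="api" accountId="8674756857"
| append
    [| tstats count where index="my_index" eventOrigin="api" serviceType="unmanaged"]
| stats sum(count) as count
```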
Hi @fredclown yes, it's possible for the HF to retain the ability to search its own locally indexed data; you just need to set indexAndForward = true in its outputs.conf (this setting belongs in the global [tcpout] stanza), e.g.:

[tcpout]
defaultGroup = new_indexers
indexAndForward = true

[tcpout:new_indexers]
server = new_indexer1:9997, new_indexer2:9997

Keep in mind that, depending on the size of the indexes stored locally, this could result in significantly higher resource utilization on the HF, so thorough testing is highly recommended.
Hi @SplunkerNoob
Assuming these are external URLs (not other dashboards/searches within Splunk), you can add the trusted domains to the drilldownUrlWhitelist setting in the web.conf file.

@SplunkerNoob wrote: I think it is working, but unfortunately I get: "The URL you clicked cannot open as it is invalid and might contain malicious code. Change the URL to a relative or absolute URL, such as /app/search/datasets or https://www.splunk.com."
1. Does the "card" index exist?
2. Is any data at all being ingested into that index?
3. Are there any parsing or connectivity issues reported in the _internal index?
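Some quick checks for each of these questions, as separate searches ("card" is the index name from the thread; the _internal search is only a generic starting point, not a definitive diagnostic). To confirm the index exists and see its event counts per indexer:

```
| eventcount summarize=false index=card
```

And to look for recent ingestion, parsing, or connectivity warnings:

```
index=_internal sourcetype=splunkd log_level=ERROR OR log_level=WARN
| stats count by component
```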
You can use the REST API to find what you need:

| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/entity report_as=text
| eval value=spath(value,"{}")
| mvexpand value
| eval entity_id=spath(value, "_key"), entity_title=spath(value, "title"), entity_name=spath(value, "identifying_name"), retired=spath(value, "retired"), mod_time=spath(value, "mod_timestamp")
| search retired=1
| eval epoch_time=strptime(mod_time,"%Y-%m-%dT%H:%M:%S.%6Q")
| eval mod_time=mod_time." UTC"
| eval date_retired=strptime(mod_time,"%Y-%m-%dT%H:%M:%S.%6Q+00:00 %Z")
| convert ctime(date_retired)
| fields entity_id entity_name date_retired
Hello friends, I am trying to create a heat map with the indexes on the left side and, in each cell of the heat map, a host with its health color, to make monitoring more practical, but I can't get it to work. Has anyone attempted this kind of search and can help me?
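As one possible starting point, here is a sketch that derives a per-host "health" value from the age of the most recent event in each index. The thresholds and the green/yellow/red mapping are assumptions for illustration, not Splunk's built-in health status:

```
| tstats latest(_time) as last_seen where index=* by index, host
| eval age=now()-last_seen
| eval health=case(age<3600, "green", age<86400, "yellow", true(), "red")
| xyseries index host health
```

Rendered as a table, each row is an index and each column a host; the cells can then be colored with the table's column formatting options.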
Try something like this:

| tstats count max(_time) AS latest_event_time where index=firewall sourcetype="cisco:ftd"
    [| inputlookup Firewall_list.csv | table Primary | rename Primary AS host]
    OR
    [| inputlookup Firewall_list.csv | table Secondary | rename Secondary AS host]
    groupby host
``` append host (Primary) and Primary for all primaries ```
| append [| inputlookup Firewall_list.csv | table Primary | eval host=Primary | eval count=0]
``` append host (Secondary) and Primary for all secondaries ```
| append [| inputlookup Firewall_list.csv | rename Secondary as host | eval count=0]
``` count for all hosts noting last event time and Primary ```
| stats sum(count) as count max(latest_event_time) AS latest_event_time values(Primary) as Primary by host
``` find all hosts not reporting ```
| where count = 0
``` count hosts for each Primary not reporting ```
| eventstats count as hosts_not_reporting by Primary
``` find where both hosts are not reporting ```
| where hosts_not_reporting = 2
This search should give you a start on what you need:

| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/service report_as=text filter="{\"enabled\":1}"
| eval services_as_json=spath(value,"{}")
| fields services_as_json
| mvexpand services_as_json
| eval kpis_as_json=spath(services_as_json, "kpis{}")
| fields - services_as_json
| mvexpand kpis_as_json
| spath input=kpis_as_json
| fields - kpis_as_json
| rename key as kpiid
| search service_title!="ServiceHealthScore"
| eval search = if(isnotnull(base_search_id),"",base_search)
| search "aggregate_thresholds.thresholdLevels{}.severityLabel"!="" "aggregate_thresholds.thresholdLevels{}.thresholdValue"!=""
| rename service_title as Service "aggregate_thresholds.baseSeverityLabel" as "Base Threshold" "aggregate_thresholds.thresholdLevels{}.severityLabel" as "Thresholds" "aggregate_thresholds.thresholdLevels{}.thresholdValue" as "Threshold Values" title as KPI description as Description unit as Unit urgency as "Importance Score"
| table Service KPI Description "Base Threshold" Thresholds "Threshold Values" "Importance Score"
| join type=outer Service
    [| inputlookup itsi_entities
    | fields services._key title
    | rename services._key as services title as host
    | mvexpand services
    | lookup service_kpi_lookup _key as services
    | stats list(host) as host by title
    | eval host=mvjoin(host, ",")
    | rename title as Service]
If the values you need are service info fields, you could use a search like this to find them. Just:
Replace <service_title> with the services you want to clone
Replace <info_field> with any service info fields you need to use

| getservice
| search title IN ("<service_title>*","<service_title>*")
| fillnull value="none" services_depends_on base_service_template_id
| fields title services_depends_on base_service_template_id
| rex field=services_depends_on "serviceid=(?<serviceid>.*)~~~"
| fillnull value="none" serviceid
| mvexpand serviceid
| join type=outer serviceid
    [| `service_kpi_list` | fields serviceid service_name]
| stats list(service_name) as dependent_services by title base_service_template_id
| eval dependent_services=mvjoin(dependent_services, ",")
| rename title as service_name base_service_template_id as template_id
| join type=outer template_id
    [| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/base_service_template report_as=text
    | eval value=spath(value,"{}")
    | mvexpand value
    | eval info_fields=spath(value,"informational.fields{}"), template_id=spath(value, "_key"), template_name=spath(value, "title")
    | fields template_id template_name]
| join type=outer service_name
    [| inputlookup itsi_entities
    | fields services._key title
    | rename services._key as services title as host
    | mvexpand services
    | lookup service_kpi_lookup _key as services
    | stats list(host) as host by title
    | eval host=mvjoin(host, ",")
    | rename title as service_name]
| makemv delim="," host
| mvexpand host
| join type=outer host
    [| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/entity report_as=text
    | eval value=spath(value,"{}")
    | mvexpand value
    | eval info_fields=spath(value,"informational.fields{}"), entity_id=spath(value, "_key"), entity_title=spath(value, "title"), entity_name=spath(value, "identifying_name")
    | appendpipe
        [| where isnull(field_type)
        | mvexpand info_fields
        | eval field_value = spath(value,info_fields."{}"), field_type="info"
        | rename info_fields as field_name]
    | where field_name IN ("<info_field>","<info_field>","<info_field>","<info_field>")
    | stats list(field_value) as field_value by field_name entity_name
    | eval field_value=mvjoin(field_value,",")
    | eval {field_name}=field_value
    | stats latest(<info_field>) as <info_field> latest(<info_field>) as <info_field> latest(<info_field>) as <info_field> by entity_name
    | rename entity_name as host]
| fields - template_id
We have a legacy all-in-one Splunk server. We have built out a new distributed environment that has an indexer cluster, and we are phasing everything over to the new environment. As such, the new search head cluster can search both the new indexers and the old all-in-one server. We plan on migrating all the UFs to start sending their logs to the new indexers soon.

I have also set up the old server to be able to search the new indexers, so that alerts and reports on that server will continue to work once the inputs are sent to the new indexers. That way we can handle report migration separately and not have to do everything in one fell swoop. I was also planning on changing the old indexer into a heavy forwarder once the UFs are repointed, so that any scripted inputs or other kinds of inputs it has will be sent to the new indexers.

My question: if I flip it to be a heavy forwarder, I know any new inputs it gets will be sent to the new indexers, which should be fine for alerts and reports since it can also search the new indexers. However, I know the existing stored logs will not be migrated into the new indexers; will the old server still be able to search those logs once it is a heavy forwarder? Will it still search its own locally indexed data as well as the new indexers? Thanks.
A search like this will also give you an output that would allow you to dump a CSV or practically clone a service tree.
Replace <service_title> with the services you want to clone
Replace <info_field> with any service info fields you need to use
Replace <old> and <new> at the end to make new service names
Use the "Create Service" Import from Search option with this search to make a clone of your service tree.

| getservice
| search title IN ("<service_title>*","<service_title>*")
| fillnull value="none" services_depends_on base_service_template_id
| fields title services_depends_on base_service_template_id
| rex field=services_depends_on "serviceid=(?<serviceid>.*)~~~"
| fillnull value="none" serviceid
| mvexpand serviceid
| join type=outer serviceid
    [| `service_kpi_list` | fields serviceid service_name]
| stats list(service_name) as dependent_services by title base_service_template_id
| eval dependent_services=mvjoin(dependent_services, ",")
| rename title as service_name base_service_template_id as template_id
| join type=outer template_id
    [| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/base_service_template report_as=text
    | eval value=spath(value,"{}")
    | mvexpand value
    | eval info_fields=spath(value,"informational.fields{}"), template_id=spath(value, "_key"), template_name=spath(value, "title")
    | fields template_id template_name]
| join type=outer service_name
    [| inputlookup itsi_entities
    | fields services._key title
    | rename services._key as services title as host
    | mvexpand services
    | lookup service_kpi_lookup _key as services
    | stats list(host) as host by title
    | eval host=mvjoin(host, ",")
    | rename title as service_name]
| makemv delim="," host
| mvexpand host
| join type=outer host
    [| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/entity report_as=text
    | eval value=spath(value,"{}")
    | mvexpand value
    | eval info_fields=spath(value,"informational.fields{}"), entity_id=spath(value, "_key"), entity_title=spath(value, "title"), entity_name=spath(value, "identifying_name")
    | appendpipe
        [| where isnull(field_type)
        | mvexpand info_fields
        | eval field_value = spath(value,info_fields."{}"), field_type="info"
        | rename info_fields as field_name]
    | where field_name IN ("<info_field>","<info_field>","<info_field>","<info_field>")
    | stats list(field_value) as field_value by field_name entity_name
    | eval field_value=mvjoin(field_value,",")
    | eval {field_name}=field_value
    | stats latest(<info_field>) as <info_field> latest(<info_field>) as <info_field> latest(<info_field>) as <info_field> by entity_name
    | rename entity_name as host]
| fields - template_id
| eval service_name=replace(service_name,"<old>","<new>"), dependent_services=replace(dependent_services,"<old>","<new>")
That's a job for eventstats.

<your search>
| eventstats first(memberID) as memberID by sessionID
For some reason my | tstats count query is returning a result of 0 when I add an OR condition to my where clause if the field doesn't exist in the dataset, or if the OR condition specifies a string value when the value for the field in the data is always an integer. For example:

This query returns the correct event count (or at least it's non-zero):

| tstats count where index="my_index" eventOrigin="api" (accountId="8674756857")

Adding this OR condition returns a count of zero. Why? Note that for this time range there are no events with a serviceType field, but for other time ranges there are events with a serviceType field:

| tstats count where index="my_index" eventOrigin="api" (accountId="8674756857" OR serviceType="unmanaged")

Adding this OR condition also returns zero. Why? It's true that accountId should normally be an integer, but it's an OR, so I still expect it to count those events:

| tstats count where index="my_index" eventOrigin="api" (accountId="19783038942" OR accountId="aaa")

Using a * results in the same non-zero count as the first query, which is expected, even though there are no events with a serviceType field:

| tstats count where index="my_index" eventOrigin="api" (accountId="8674756857" OR serviceType="unmana*")

Why would adding an OR condition in tstats cause the count to be zero? The same problem does not occur with a regular search query. I am on Splunk 9.1.0.2.