All Posts

This query can be further modified into this:

index="_internal" source="*metrics.log" per_index_thruput series=* NOT ingest_pipe=*
| stats sum(kb) as kb values(host) as host by series

However, this query will also count the KBs logged into indexes via summary indexing (sourcetype=stash), which is not supposed to be charged against the license. Hence, I would prefer this query:

index=_internal type=usage idx IN (*) source="*license_usage.log" NOT (h="" OR h=" ")
To get metrics index info as well:

| rest /services/data/indexes count=0 datatype=all
Hi, how can we normalize MAC addresses (such as XX:XX:XX:XX:XX:XX or XX-XX-XX-XX-XX-XX) in our environment before implementing the Asset and Identity framework in Splunk ES, given that we are collecting data from workspaces?
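For context, a minimal SPL sketch of one way to normalize before building the asset/identity lookup (the field name mac and the sample value are assumptions, not from the post): lowercase the value and convert dash separators to colons so both formats collapse to one canonical form.

| makeresults
| eval mac="AA-BB-CC-DD-EE-FF"
| eval mac=lower(replace(mac, "-", ":"))

The same eval could be applied in the search that generates the asset lookup, or as a calculated field on the source data.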
The most blunt way to implement this would be to use the constraint on ValueE as a subsearch to establish the search period (earliest, latest). I will assume that ValueE and all the other 11 values are already extracted by Splunk. I will call them other_field01, other_field02, etc. Here is an idea if you are only interested in distinct values of these.

index=my_index_plant sourcetype=my_sourcetype_plant
    [ index=my_index_plant sourcetype=my_sourcetype_plant Instrument="my_inst_226" ValueE > 20
      | stats min(_time) as earliest max(_time) as latest ]
| stats values(other_field01) as other_field01 values(other_field02) as other_field02, ... values(ValueE) as ValueE by Instrument

Hope this helps.
Thanks. Where can we see all the HEC connections that are onboarded?
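Not from the original thread, but one common way to list the onboarded HEC inputs is the REST endpoint for HTTP inputs (a sketch; the columns shown are typical but may vary by version, and the rest command requires sufficient privileges):

| rest /services/data/inputs/http
| table title token disabled index sourcetype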
Thanks, but how do I present that in a bar chart (to add foo to my bar chart)? I can present it only in a table:

| stats sum(CountEvents) by CT
| rename "sum(CountEvents)" as "countE"
| eventstats sum(countE) as Total
| eval perc=round(countE*100/Total,2)
| eval foo = countE . "(" . perc ."%" .")"
| fields - Total perc
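One possible approach (a sketch, not from the thread): fold the percentage into the category label itself, so the bar chart renders one bar per labeled category with countE as the numeric series.

| stats sum(CountEvents) as countE by CT
| eventstats sum(countE) as Total
| eval perc=round(countE*100/Total,2)
| eval CT_label=CT." (".perc."%)"
| fields CT_label countE

Selecting the bar chart visualization on this result puts CT_label on the category axis.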
http://...@b6436b649711.1/js/contrib/moment.js
It seems moment.js is missing at this location; we verified that in Splunk 9.1.x+ this internal Splunk file seems to be missing...
Hi, maybe you could try to get the data by adding this to your base search?

earliest=-3mon@w+d latest=@w+d
Yes, I have some indexes and sourcetypes, but I don't know how to choose the index and sourcetypes for this IP address.

Can you confirm this: you want to know which index/indices and which sourcetype(s) contain the records of interest. Is this correct?

index=* src=**.**.***.** OR **.**.***.** dest_ip=**.***.***.*** dest_port=443
| stats count by index sourcetype

This should give you a list of index-sourcetype combinations that contain the specific IP and port. (Also, if you have a search command immediately following the initial search, the two should be combined into one; the first command in a pipeline is an implied "search".)
Hi @Helios, the first question is: what are the hardware resources you have on your server? Splunk requires at least 12 CPUs and 12 GB RAM; usually the issue is related to the CPUs.

It seems that you don't have sufficient resources (possibly only in some time periods) to run all the scheduled searches, so many of them are skipped. So analyze the searches using the Monitoring Console to understand whether there's a resource problem or you only need to define a different schedule for the saved searches.

Last check: how many real-time searches do you have in execution? Remember that a Splunk search uses a CPU for each search (more than one if you have subsearches) and releases it only when the search is finished (never for real-time searches!), so if you have two or three real-time searches in execution, you risk exhausting your resources. In this case, schedule these searches to run over fixed time periods.
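As a starting point for that analysis, a sketch of a search over the scheduler logs to surface skipped searches (field names follow the standard scheduler sourcetype; adjust the time range to the periods where you see skips):

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name reason
| sort - count

Ciao. Giuseppe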
Hi @strehb18, if your requirement is to have only the last result and only one event, you could use something like this:

<your_search>
| join type=left cwo
    [ search source=punch index=your_index
      | rename work_center as position
      | sort -_time
      | head 1 ]

Only one hint: the join command is very slow and consumes many resources; there are usually other solutions to replace it, e.g. the stats command, but this depends on your use case.
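For reference, a stats-based sketch of the same idea (an illustration, assuming both result sets carry the cwo field):

<your_search>
| append
    [ search source=punch index=your_index
      | rename work_center as position
      | sort -_time
      | head 1 ]
| stats values(*) as * by cwo

This avoids join's subsearch limits, at the cost of merging all field values per cwo.

Ciao. Giuseppe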
Hi, when you got error code 500 you obviously got a connection to splunkd's HTTP layer, but for some reason it didn't work correctly. How are you trying to connect, and from where? Did this work on the HF host?

curl -vk https://localhost:8000

And how about when you switch localhost to your host's real name and/or IP? Is there anything in Splunk's internal logs under /opt/splunk/var/log/splunk (access + splunkd logs)?

r. Ismo
Hi @SplunkSN, if you're speaking of alerts on different Splunk servers, the only solution is to have a Search Head Cluster, so only one server will run the alerts.

If instead you're speaking of alerts on one server, and site1 and site2 are different hosts, you have to add this condition as a filter in your search. In other words, if there's a condition to test (e.g. a status parameter, also in another search) to find the active host, you could run something like this:

<your_main_search>
    [ search <your_host_status_search>
      | dedup host
      | fields host ]
| ...

Ciao. Giuseppe
Those are stored in the _internal index. If you are not part of the Splunk admin team, you probably don't have access to it. You could try

index=_internal

to see if you can see events in that index, and if you can, then you can try this:

index="_internal" component=SavedSplunker sourcetype="scheduler" thread_id="AlertNotifier*" NOT (alert_actions="summary_index" OR alert_actions="") app!=splunk_instrumentation
| fields _time app result_count status alert_actions user savedsearch_name splunk_server_group
| stats earliest(_time) as _time count as run_cnt sum(result_count) as result_count values(alert_actions) as alert_actions values(splunk_server_group) as splunk_server_group by app, savedsearch_name user status
| table _time, run_cnt, app, savedsearch_name user status result_count alert_actions splunk_server_group

It shows alerts which have previously run and what has happened. If you don't have access to the internal logs, then you should ask your Splunk admin team to check what has happened.
Hi @nithys, the easiest way is to create a lookup containing the values for Domain, for "Entity associated", and for Data Entity.

Then you should create a search on this lookup for Domain. Then you have to create a search on the lookup for "Entity associated", using the Domain token to filter results. Finally, you have to create a third search on the lookup using both the other tokens to filter results.

If you could share the searches you're using I could be more detailed.
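A sketch of what those three cascading searches could look like, assuming a lookup file named entities.csv with columns Domain, Entity_associated, and Data_Entity, and dashboard tokens $domain_tok$ and $entity_tok$ (all of these names are placeholders):

| inputlookup entities.csv
| stats values(Domain) as Domain

| inputlookup entities.csv
| search Domain="$domain_tok$"
| stats values(Entity_associated) as Entity_associated

| inputlookup entities.csv
| search Domain="$domain_tok$" Entity_associated="$entity_tok$"
| stats values(Data_Entity) as Data_Entity

Ciao. Giuseppe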
Hi @jmrubio, if firewalld is running, this could be the issue. Try to disable it (or permit traffic on port 8000) and check if you can access the web interface. Ciao. Giuseppe
Hi @nytins, as I said, I don't know PagerDuty, and probably the issue is that it doesn't permit multiple values. If you don't have many results, you could create a workaround like the following (a sketch follows below):
- create a lookup (called e.g. PageDuty_temp.csv),
- save your results in this lookup,
- create a new alert that: searches on this lookup, takes only the first value, sends a message to PagerDuty, and removes the used value from the lookup.
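A sketch of the two pieces of that alert's search, assuming the lookup keeps a stable row order (the PagerDuty delivery itself would be the alert action, not SPL). Taking the first value:

| inputlookup PageDuty_temp.csv
| head 1

Then removing the used row:

| inputlookup PageDuty_temp.csv
| streamstats count
| where count > 1
| fields - count
| outputlookup PageDuty_temp.csv

Ciao. Giuseppe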
Hi @yuanliu, I appreciate all your responses and I hope the flowchart below makes the requirement clear.

[flowchart image not preserved here]

Thank you
You're a real legend, @bowesmana. I didn't realize that you can put wildcards in the middle. Thank you so much; I am new to Splunk, so your help is really appreciated.
It's a KV store. When I try to add a row, it updates the existing row. For example, instead of this output:

name  rating  comment    experience  subject
A     3       good       4           math
B     4       very good  7           science
A     5       Excellent  4           math

I am getting this:

name  rating  comment    experience  subject
A     5       Excellent  4           math
B     4       very good  7           science

I tried these two solutions. I thought I didn't have write access, but I do: I can update the file, but I'm not able to add a new row.

| inputlookup table_a
| search name="A"
| eval rating=5, comment="Excellent", key=_key
| outputlookup append=true key_field=key table_a

-----------------------------

| inputlookup table_a
| search name="A"
| eval rating=5, comment="Excellent"
| outputlookup append=true table_a
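For what it's worth, a sketch of inserting a genuinely new record: with KV store lookups, outputlookup updates any record whose _key matches an existing one and inserts records that carry no _key, so build the new row from scratch instead of reading it back via inputlookup (makeresults and the field values below are illustrative):

| makeresults
| eval name="A", rating=5, comment="Excellent", experience=4, subject="math"
| fields - _time
| outputlookup append=true table_a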