Interesting.  I see _internal and non-internal indexes when I run it on one of my sandboxes. What do you see when you run the tstats query alone?
root@armor-index:/opt/splunk/etc/system/local# cat props.conf
[armor_json_02]
KV_MODE = json

root@armor-uf:/opt/splunkforwarder/etc/apps/armor/local# cat props.conf
[armor_json_02]
SHOULD_LINEMERGE = true
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
CHARSET = UTF-8
#INDEXED_EXTRACTIONS = json
KV_MODE = json
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = true
AUTO_KV_JSON = false

This is set. Let me test whether I get the same results.
In my results under the index column, all I get is "_internal".
Ah, got it. Unfortunately, there is no way to get that kind of a chart with the built-in visualizations.
This should do it.  It just runs both queries and uses the stats command to regroup the results.

index=_internal source=*metrics.log group=tcpin_connections
| eval sourceHost=if(isnull(hostname), sourceHost, hostname)
| rename connectionType as connectType
| eval connectType=case(fwdType=="uf","univ fwder", fwdType=="lwf","lightwt fwder", fwdType=="full","heavy fwder", connectType=="cooked" or connectType=="cookedSSL","Splunk fwder", connectType=="raw" or connectType=="rawSSL","legacy fwder")
| eval version=if(isnull(version),"pre 4.2",version)
| rename version as Ver
| dedup sourceIp
| append [| tstats values(host) as sourceHost where index=* by index | mvexpand sourceHost]
| stats values(*) as * by sourceHost
| table connectType, sourceIp, sourceHost, Ver, index
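For anyone curious what the append-then-regroup step is doing, here is a rough Python sketch (illustrative only, not Splunk internals; the field names and sample rows are made up to mirror the query above). `stats values(*) as * by sourceHost` effectively collects, per host, the distinct values of every other field from both result sets:

```python
from collections import defaultdict

def regroup(rows, key="sourceHost"):
    """Mimic `| stats values(*) as * by sourceHost`: for each key value,
    collect the distinct values seen for every other field."""
    grouped = defaultdict(lambda: defaultdict(set))
    for row in rows:
        k = row.get(key)
        if k is None:
            continue
        for field, value in row.items():
            if field != key and value is not None:
                grouped[k][field].add(value)
    # stats values() returns sorted multivalue fields
    return {k: {f: sorted(v) for f, v in fields.items()}
            for k, fields in grouped.items()}

# Rows shaped like the metrics.log query output...
metrics_rows = [
    {"sourceHost": "web01", "connectType": "univ fwder", "Ver": "9.0.4", "sourceIp": "10.0.0.5"},
]
# ...appended to rows shaped like the tstats subsearch output.
tstats_rows = [
    {"sourceHost": "web01", "index": "main"},
    {"sourceHost": "web01", "index": "security"},
]

result = regroup(metrics_rows + tstats_rows)
print(result["web01"]["index"])  # ['main', 'security']
```

The key point is that `append` does no matching at all; the `stats ... by sourceHost` afterwards is what stitches the two result sets together on the shared field.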
Just set KV_MODE=json on the SH (indexer) and remove the I_E from the forwarder.
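For reference, the end state being suggested would look roughly like this (a sketch only, reusing the sourcetype name from this thread; adjust paths to your own apps):

```ini
# Forwarder: /opt/splunkforwarder/etc/apps/armor/local/props.conf
# (no INDEXED_EXTRACTIONS, no KV_MODE here)
[armor_json_02]
SHOULD_LINEMERGE = true
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
CHARSET = UTF-8

# Search head: e.g. /opt/splunk/etc/system/local/props.conf
[armor_json_02]
KV_MODE = json
```

This keeps the JSON extraction at search time only, avoiding the double-extraction the docs warn about when both INDEXED_EXTRACTIONS = json and KV_MODE = json are set.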
This thread is almost a year old with an accepted solution.  For a better chance at helpful responses, please post a new question.
Before I go ahead, correct me if I don't understand correctly.

From the forwarder's props.conf, remove I_E and add KV_MODE = json. Then on the indexer, create the same props.conf as above and keep KV_MODE = json? Or delete the one on the forwarder and keep only the one on the SH (indexer)?
KV_MODE=json is a search-time setting, so it should be on the SH. You can create a new sourcetype from the UI to test.
I have that sourcetype set up on the forwarder side. On the indexer/SH, I can't find that specific sourcetype. Do I need to have the props.conf on the indexer too?

If you mean to update props.conf to KV_MODE = json and disable the I_E, I've already done that on the forwarder side.

UPDATE: I just found this in the docs:

* When 'INDEXED_EXTRACTIONS = JSON' for a particular source type, do not also set 'KV_MODE = json' for that source type. This causes the Splunk software to extract the JSON fields twice: once at index time, and again at search time.

Should I still not use I_E?
You can go to the UI > Settings > Sourcetypes > armor_json_02 and update KV_MODE=json after disabling the I_E.
I have a bucket stuck in fixup tasks (indexer clustering > bucket status), under both SF and RF, so neither the search factor nor the replication factor is met in the indexer cluster. I tried to roll and resync the bucket manually, but that didn't work. There are no buckets under excess buckets; I cleared those more than 3 hours ago. Is there any way to meet SF and RF without losing the data or the bucket?

Forgot to mention: I had a /opt/cold drive with an I/O error on one indexer. To get it fixed I had to stop Splunk and remove that indexer from the cluster; all other indexers have been up and running since last night. All 45 indexers on the cluster master are up, and I left the bucket fixup tasks to repair and rebalance overnight. When I checked this morning there were only 2 fixup tasks left, one in SF and one in RF.
1) /opt/splunkforwarder/etc/apps/armor/local/props.conf:

[armor_json_02]
AUTO_KV_JSON = false
CHARSET = UTF-8
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = true

2) I'll try using KV_MODE for JSON instead of I_E now.
We're using this query to retrieve metrics on our hosts:

index=_internal source=*metrics.log group=tcpin_connections
| eval sourceHost=if(isnull(hostname), sourceHost, hostname)
| rename connectionType as connectType
| eval connectType=case(fwdType=="uf","univ fwder", fwdType=="lwf","lightwt fwder", fwdType=="full","heavy fwder", connectType=="cooked" or connectType=="cookedSSL","Splunk fwder", connectType=="raw" or connectType=="rawSSL","legacy fwder")
| eval version=if(isnull(version),"pre 4.2",version)
| rename version as Ver
| dedup sourceIp
| table connectType, sourceIp, sourceHost, Ver

This gives us everything we need, except for which indexes these hosts are sending data to. I'm aware of this query to retrieve the indexes and the hosts that are sending data to them:

| tstats values(host) where index=* by index

How can I combine the two, either with a join or a subsearch, so that the table output has a column for index, giving us a list of indexes each host is sending to?
We are still seeing these ERROR messages since the upgrade. We never found any evidence or root cause.
The solution in my case was a field marked as required that was missing from the data; after adding it back to the data, the issue was resolved.
Hi @Udaya Bhaskar.chimakurthy, You can sign up right here - https://accounts.appdynamics.com/trial
If you are just looking to define 'yesterday' as either Sunday or Friday on a Monday, then this example shows you how to make a search time range that is either the previous day or, if it's Monday and weekends are excluded, Friday.

<form version="1.1" theme="light">
  <label>ExcludeWeekends</label>
  <fieldset>
    <input type="radio" token="weekends" searchWhenChanged="true">
      <label>Weekends</label>
      <choice value="exclude">Exclude Weekends</choice>
      <choice value="include">Include Weekends</choice>
      <default>exclude</default>
      <initialValue>exclude</initialValue>
    </input>
  </fieldset>
  <search>
    <done>
      <set token="search_start">$result.search_start$</set>
      <set token="search_end">$result.search_end$</set>
    </done>
    <query>| makeresults
| fields - _time
| eval now=now()
| eval prev_day=if(strftime(now, "%a")="Mon" AND "$weekends$"="exclude", -3, -1)
| eval search_start=relative_time(now, prev_day."d@d")
| eval search_end=search_start + 86400</query>
  </search>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_audit | bin _time span=1d | stats count by _time</query>
          <earliest>$search_start$</earliest>
          <latest>$search_end$</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
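The prev_day eval above implements a simple rule: step back three days on a Monday when weekends are excluded, otherwise one day. The same arithmetic can be sketched in Python (illustrative only, not Splunk code; the function name is made up):

```python
from datetime import date, timedelta

def previous_search_day(today: date, exclude_weekends: bool = True) -> date:
    """Mirror the dashboard's eval: on a Monday with weekends excluded,
    'yesterday' is three days back (Friday); otherwise it is one day back."""
    # date.weekday() returns 0 for Monday
    step = 3 if exclude_weekends and today.weekday() == 0 else 1
    return today - timedelta(days=step)

print(previous_search_day(date(2024, 6, 10)))  # Monday -> 2024-06-07 (Friday)
print(previous_search_day(date(2024, 6, 10), exclude_weekends=False))  # -> 2024-06-09 (Sunday)
```

Note that, like the eval, this only special-cases Monday; a search run on a Saturday or Sunday still steps back one day.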
I don't get it, either.  When I plug your numbers into the query I get the expected 21.67.  Can you share a screenshot just so we're sure we're looking at the right numbers?
Try again.
Verify the lookup file permissions have not changed.
Make sure no one else is editing the file.
Make sure no other programs (outside of Splunk) have locked the file or have it open for exclusive access.