All Posts

Check the Cisco app - https://splunkbase.splunk.com/app/1467. For every log source / event type, the named OEM parsers are required to parse the data and provide dashboards and contextualization - so do add the apps, i.e. Cisco.
@sayala Firstly, I would say this is a "not best practice" use of tags, for the reasons you are running up against now. Surely something like a custom field would be better, as you can both populate and use it in any way you want, and it comes into Splunk too with the container data if you are using the tags for trending etc.? I can't see a REST endpoint for tag management at a system level, which would have been your best option to do this at any scale. Unfortunately, for now and without a lot of potential digging, you will need to delete them manually. I would advise you to think of a different approach though, otherwise you will face a buggy UI going forward. Hope this helped!? Happy SOARing
The operation of SmartStore has been confirmed. I have a question regarding the 100GB EBS volume attached to EC2. If I do not put the max_cache_size setting in indexes.conf, will it freeze when the cache fills the 100GB? In another test, an EBS volume created with 10GB froze with a full-capacity error when max_cache_size was not set. What I would like to ask is whether, if I don't set max_cache_size, Splunk will stop when the volume becomes full.
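For reference, a minimal sketch of where the cache cap is usually set - in recent Splunk versions max_cache_size belongs to the cache manager stanza in server.conf rather than indexes.conf, with the value in MB and 0 meaning no explicit limit (the 90000 figure below is an assumed example sized to leave headroom on a 100GB volume):

[cachemanager]
# cap on SmartStore cache disk usage, in MB; default 0 = no explicit limit
max_cache_size = 90000
# evict least-recently-used buckets to stay under the cap
eviction_policy = lru
# space the cache manager tries to keep free regardless of the cap, in MB
eviction_padding = 5120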
I have a Splunk query which generates output in csv/table format. I wanted to convert this to JSON format before writing it to a file. tojson does the job of converting; however, the fields are not in the order I expect. Table output: timestamp,Subject,emailBody,operation --> the resulting JSON output is in the order subject,emailbody,operation,timestamp. How do I get tojson to write fields in this order, or is there an alternate way of getting the JSON output as expected?
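If tojson's field order can't be forced, one alternative is to build the JSON string yourself with json_object (available from Splunk 8.1), which keeps the keys in the order you pass them. A minimal sketch, assuming the four field names from the question:

| table timestamp Subject emailBody operation
| eval json=json_object("timestamp", timestamp, "Subject", Subject, "emailBody", emailBody, "operation", operation)
| fields json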
Hi, I’m trying to enhance the functionality of the "Acknowledge" button in a Splunk IT Service Intelligence episode. When I click it, I want it to not only change the status to "In Progress" and assign the episode to me, but also trigger an action such as sending an email or creating a ticket in a ticketing system. I’m aware that automatic action rules can be set in aggregation policies, but I want these actions to occur specifically when I manually click the "Acknowledge" button. Is there a way to achieve this? Thanks!
Shouldn't it be something like this to run the alert (assuming you want it at midnight)? 0 0 * * Monday/2
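As an aside, standard cron cannot express "every two weeks" directly, and day-name steps like Monday/2 are not portable across cron implementations. One common workaround (a sketch, assuming alternating ISO week numbers is close enough for your needs): schedule the alert weekly with 0 0 * * 1 and gate the search itself on week parity:

| where (tonumber(strftime(now(), "%V")) % 2) == 0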
Dear @sainag_splunk
I tried using the below props.conf:

DATETIME_CONFIG =
KV_MODE = json
LINE_BREAKER = (?:,)([\r\n]+)
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = _time
TIME_FORMAT = %2Y%m%d%H%M%S
TRUNCATE = 0
category = Structured
description = my json type without truncate
disabled = false
pulldown_type = 1
MAX_EVENTS = 1000000
SHOULD_LINE_MERGE = false

But it was the same. Umm... I wonder something about your answer. I applied it on the deployer server, and it will be deployed as an app to all of the Universal Forwarders. So I set inputs.conf as below:

[batch://C:\splunk\my_data\*.json]
index = myIndex
sourcetype = my_json
crcSalt = <SOURCE>
move_policy = sinkhole

The app which contains this inputs.conf also has the above props.conf. However, that is not how your answer's concept is applied, is it? How do I apply your answer in my system? I hope you can help me in detail; I'm sorry, I'm a beginner in Splunk.

My system has 3 search heads; 1 is a Splunk app server, 2 is the cluster master, and 3 is the deployer. Besides these, there are 5 indexers. So the clients with the UF installed send their data to the 5 indexers with load balancing, and we search on the 3 search heads, where the results are shown.

Please help me. Thank you.
Hi Tiong.Koh, Thank you for posting to the community. Could you clarify how many characters are currently being captured and how many characters you'd need to be shown? This will help in understanding the scope. From what I understand, the maximum query text shown is hardcoded to 32,767 characters in the Database Agent. We can't change this setting because increasing this limit might lead to higher memory consumption on the DB agent side and increased storage usage on the Controller side. Regards, Martina
Can you please try to use loadjob as mentioned in the documentation?

<search>
<query>
| loadjob savedsearch="admin:search:SavedSearch"
</query>
</search>
Someone mentioned in a previous post that if a null value is present as an answer, it can mess with the viz.
File integrity is checked on startup - have you done a restart?
<deleted my own answer>
Given the complexity of the regex, I suspect the sample event may be over-simplified. However, if it's simply a matter of the value field being an integer followed by a space, with everything after that going into the reason field, then this rex command will do. | rex "(?<MetricValue>\d+)\s(?<Reason>.*)"
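A quick way to sanity-check it against the sample data (a throwaway sketch with makeresults):

| makeresults
| eval _raw="600 reason and this"
| rex "(?<MetricValue>\d+)\s(?<Reason>.*)"
| table MetricValue Reason

which should return MetricValue=600 and Reason="reason and this".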
Actually most of your problem is coming from multiple capture groups nested inside a capture group, one designated by each "()" pairing.

| makeresults format=csv data="sample
600 reason and more:then what
701 code practice Reason
899 something
104 this
12 nothing"
| rex field=sample "^(?<Metric>[^\s]+)\s(?<Reason>[^:|^R]+).*$"
| table sample Metric Reason

You can see in my example that after the <field> I did not nest additional capture group designations such as what you were using. The above generates some sample data which I hope fits your use case, but you provided minimal examples so I made assumptions. The rex as coded will extract the information you are looking for, assuming that the Metric is the first thing on the line or field, followed by the Reason, ending at your indicated cut-off characters or end of line. Feel free to remove the beginning-of-line and end-of-line anchors if they don't fit your data. Here is the output I get:

sample                          Metric   Reason
600 reason and more:then what   600      reason and more
701 code practice Reason        701      code practice
899 something                   899      something
104 this                        104      this
12 nothing                      12       nothing
Probably a basic question. I have the following data:

600 reason

and this rex:

(?<MetricValue>([^\s))]+))(?<Reason>([^:|^R]+))

What I am getting is 60 in MetricValue and 0 in Reason. I presume that is due to the match being up to the next NOT space, thus MetricValue is 60 and 0 remains in the data for Reason. What is the right way to do this such that I get value = 600 and reason = reason?
Of course. That's what you get when you're writing faster than you're thinking. That eventstats should have a "BY ip" clause so you get the count of distinct values for each separate ip. So:

| eventstats dc(name) AS dc BY ip

The rest stays the same.
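For completeness, here is the whole pipeline against the sample data from this thread (a sketch; makeresults format=csv needs Splunk 8.2 or later, on older versions substitute your real search):

| makeresults format=csv data="ip,location,name
1.1.1.1,location-1,name0
1.1.1.1,location-1,name1
1.1.1.2,location-2,name2
1.1.1.2,location-20,name0
1.1.1.3,location-3,name0
1.1.1.3,location-3,name3
1.1.1.4,location-4,name4
1.1.1.4,location-4,name4b
1.1.1.5,location-0,name0
1.1.1.6,location-0,name0"
| eventstats dc(name) AS dc BY ip
| where name!="name0" OR (name=="name0" AND dc==1)
| table ip location name

With the BY ip clause the distinct count is computed per address, so name0 survives for 1.1.1.5 and 1.1.1.6, where it is the only name, and is dropped everywhere else, matching the expected output.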
Thanks ccWildcard. Executing /opt/splunk/bin/splunk cmd python script.py makes sense. Will update my app and retest.
@R15  For monitor stanzas, it's still pretty much the same. However, many new types of inputs exist too (modular, scripted, HEC, etc.), which do not rely on the fishbucket.
Hi @PickleRick
When I ran the following command, the dc returned 6 for each row:

| eventstats dc(name) as dc

dc   ip        location     name
6    1.1.1.1   location-1   name0
6    1.1.1.1   location-1   name1
6    1.1.1.2   location-2   name2
6    1.1.1.2   location-20  name0
6    1.1.1.3   location-3   name0
6    1.1.1.3   location-3   name3
6    1.1.1.4   location-4   name4
6    1.1.1.4   location-4   name4b
6    1.1.1.5   location-0   name0
6    1.1.1.6   location-0   name0

So the output is still missing 1.1.1.5 and 1.1.1.6. Only the name0 entries that exist on multiple rows should be removed. Thanks for your help.

| where name!="name0" OR (name=="name0" AND dc=1)

output:

dc   ip        location     name
6    1.1.1.1   location-1   name1
6    1.1.1.2   location-2   name2
6    1.1.1.3   location-3   name3
6    1.1.1.4   location-4   name4
6    1.1.1.4   location-4   name4b

Expected output:

ip        location     name
1.1.1.1   location-1   name1
1.1.1.2   location-2   name2
1.1.1.3   location-3   name3
1.1.1.4   location-4   name4
1.1.1.4   location-4   name4b
1.1.1.5   location-0   name0
1.1.1.6   location-0   name0
I had this same issue. I built an Ansible playbook that needed to run a Python script. I got this error when running: /opt/splunk/bin/python script.py What fixed it: /opt/splunk/bin/splunk cmd python script.py Not sure if you're having the same problem, but for some reason the requests module doesn't load right or handle SSL if you do /opt/splunk/bin/python, but DOES work correctly if you use /opt/splunk/bin/splunk cmd python. Hope it helps!
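In case it helps to see why: splunk cmd runs the given command with Splunk's own environment (SPLUNK_HOME, PYTHONPATH, LD_LIBRARY_PATH, and friends) exported first, which is what lets the bundled Python modules and Splunk's OpenSSL libraries resolve correctly. A quick sketch to compare the two:

# show the environment that 'splunk cmd' sets up
/opt/splunk/bin/splunk envvars

# run the script inside that environment
/opt/splunk/bin/splunk cmd python script.py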