All Posts

Hi, I am trying to ingest the botsv2 and botsv3 indexed data into Security Essentials for demo and learning purposes, but the onboarding background search only checks data from the last 30 days, and both BOTS datasets are about six years old. How can I modify the onboarding search to expand its search time range?

What have you tried, and what were the results? Or is this some sort of homework that everyone is posting for an answer? (Yes, this exact question has come up again and again in recent days.) The syntax of fillnull is no secret:

| fillnull App1 App2 App3 App4

Now show us how you use it and what results you get. No cheating :-)

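For reference, a minimal sketch of the usual pattern, assuming a chart whose renamed columns are App1-App4 (index, source, and names are borrowed from the related question below; the rex extractions are omitted for brevity):

index=testindex source=sourcelogs
| chart count over Service by List
| rename "Received main message" as App1 "Application published to service" as App2 "List status are in process" as App3 "Application process running successfully" as App4
``` naming the fields explicitly makes fillnull create them with 0 even when no event produced that column ```
| fillnull value=0 App1 App2 App3 App4
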
I checked this, however it was not matching. Don't you think it should be as below?

()\w{3}\s\d+\s+\d+\:\d+\:\d+\s+

However, even after updating this it is still not working. Does it apply only to new events after the change, or to historical ones as well? And how often do the config changes take effect?

Hi @gcusello, thank you so much - the solution you gave works perfectly. But we don't have the role needed to upload lookup files. We are planning to reduce the hosts list so that each environment has two hosts. How can we get this done in that scenario? Please suggest.

Hello, I have the following sample log lines from a Splunk search query:

line1 line2 line3: field1 : some msg line4 line5 status: PASS line6 line7 line3: field2: some msg line8 line9: status: PASS line1 line2 line3: field3: some msg line4 line5: status: PASS line1 line2 line3: field4: some msg line4 line5: status: PASS

I want to write a transaction that returns the lines between field1 and status: PASS, field2 and status: PASS, field3 and status: PASS, and so on. I have tried the following search with multiple startswith values:

index="test1" source="test2" run="test3"
| transaction source run startswith IN ("field1", "field2", "field3") endswith="status: PASS"

Instead of using the IN keyword for startswith, I want to use a CSV lookup table, messages.csv. Sample messages.csv content:

id,Message
1,field1
2,field2
3,field3
4,field4

I want to write the transaction command with the startswith parameter matching each Message value from messages.csv. My lookup CSV may have 100 different rows with different messages, and there is also a chance that my search results contain no lines with field1, field2, field3, or field4. Can someone please help with how to write a transaction where startswith needs to match each Message in messages.csv?

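One possible approach, as a sketch rather than a tested answer: tag the candidate start events by matching them against the lookup, then use transaction's startswith=eval() form. The rex extraction below is a guess based on the sample lines, and it assumes messages.csv is accessible as a lookup:

index="test1" source="test2" run="test3"
``` extract the token after "line3:" - this regex is an assumption about the event format ```
| rex field=_raw "line3:\s*(?<candidate>\w+)\s*:"
``` id is only populated when candidate matches a Message row in messages.csv ```
| lookup messages.csv Message AS candidate OUTPUT id
| transaction source run startswith=eval(isnotnull(id)) endswith="status: PASS"

This sidesteps hard-coding 100 values into the search, and events that match nothing in the lookup simply never open a transaction.
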
We are seeing a very different issue.

1. As shown in the table, when there are no logs for one of the List values, the rows return null values - e.g. the App2 and App3 columns. We used | fillnull value=0 and it is not working.
2. If we have data for any one of the services (e.g. service=fast, App4 column), it shows the count and the rest of the rows are filled with zero.

How do we fill the null values with zero even when there is not a single count for a column?

Query:

index=testindex source=sourcelogs
| rex field=_raw "service :\s(?<Service>\w+)"
| rex field=_raw "(?<List>Received main message|Application published to service|List status are in process|Application process running successfully)"
| chart count over Service by List
| rename "Received main message" as App1 "Application published to service" as App2 "List status are in process" as App3 "Application process running successfully" as App4
| table App1 App2 App3 App4

Result of the query:

Service   App1   App2   App3   App4
Token     10                   0
Modern    40                   0
Surb      3                    0
Fast      12                   4
Normal    4                    0
Forward   6                    0
Medium    7                    0

How to check if the host has been correctly whitelisted to receive configuration from Splunk Deployment Server?
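A sketch of two quick checks (the server class name and host pattern here are hypothetical, not from the thread). On the deployment server, confirm the client has phoned home:

splunk list deploy-clients

Then confirm the host matches a whitelist entry in serverclass.conf:

[serverClass:my_class]
whitelist.0 = myhost*

If the client appears in deploy-clients but receives no apps, the whitelist pattern (or the serverClass-to-app mapping) is the usual suspect.
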
Hi, I am Lily. I want to know how to customize the MLTK models used in ESCU rules. If that is not possible, is it possible to check the contents of the models inside MLTK?

<example> I want to know how 'count_by_http_method_by_src_1d' was made.

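Not an authoritative answer, but for inspecting a fitted MLTK model, the MLTK summary command is the usual starting point. A sketch, assuming the model named in the question exists on your search head:

``` show the parameters of the fitted model ```
| summary count_by_http_method_by_src_1d

Fitted models are stored as lookup artifacts named __mlspl_<model_name>, so listing them via REST can also help locate where a model lives:

| rest /servicesNS/-/-/data/lookup-table-files
| search title="__mlspl_*"
| table title eai:acl.app
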
The number of values in the subsearch cannot be too large, as it will perform really badly, but one slight change to @ITWhisperer's subsearch is to do

<main index> NOT [ search <other source> NoHandlerFoundException
| stats values(xCorrelationId) as search
| format ]

which will perform faster. It changes the outer search from

<main index> NOT ( ( ( xCorrelationId=A OR xCorrelationId=B OR ... ) ) )

to

<main index> NOT ( ( ( A OR B OR C OR D ... ) ) )

where A, B, etc. are the values of xCorrelationId. The key point is having a field named 'search' in the output rather than xCorrelationId, which changes the effect of the format command.

As @PickleRick says, without knowing your data it's not totally clear what may work in your case. Are you actually searching the same index in both cases? From your search example, the subsearch will contain the events from the outer search. You can generally combine the two data sets into a single search and do some data munging before doing stats values(), so:

(index="index" "mysearchtext") OR (index="index")
``` This gets a field called mr_id from any event that has a field called message matching the pattern ```
| rex field=message ", request_id: \\\"(?<mr_id>[^\\\"]+)"
``` This extracts request.id from a JSON payload ```
| spath request.id
``` This creates a common field called id, taken either from a non-null request.id or from mr_id. Depending on the event type the id is extracted from, you should hopefully end up with a single id. Note that if your source index really is "index", this assumes the events with mysearchtext will NOT also have a JSON payload with the request id ```
| eval id=coalesce('request.id', mr_id)
``` This aggregates the fields by the common id. Note it uses values(), so you don't know which event came at which time, but that can be solved if needed ```
| stats values(mynewfield) as mynewfield values(_time) as times by id
``` This just formats the multivalue times field to be human readable ```
| eval times=strftime(times, "%F %T.%Q")

@GaryZ Unfortunately you can't go from chart to table. Look at the XML and you will see that <chart> and <table> are completely separate XML types with a whole different bunch of configuration options. The token replacement mechanism simply replaces the

<option name="charting.chart">$chart$</option>

with the appropriate piece of text - all other options being equal, the replacement will work. The way to solve this problem is to have both panels, one for charts and one for the table, and use token dependency to switch between the panels, like this example. Note that it uses a base search to populate both searches, so there is no duplication. You can then use any post-processing in a panel that requires additional processing to suit the visualisation.

<form version="1.1">
  <label>Visualisation Selection</label>
  <search id="base_panel_data">
    <query>
| makeresults count=5000
| eval car=mvindex(split("Volvo,Mercedes,VW,Porsche,Jaguar,Tesla,BYD,Toyota,Suzuki",","), ((random() % 97) * (random() % 71)) % 9)
| stats count by car
    </query>
  </search>
  <fieldset submitButton="false"></fieldset>
  <row>
    <panel>
      <input type="dropdown" token="viz_type" searchWhenChanged="true">
        <label>What viz type</label>
        <choice value="pie">Pie</choice>
        <choice value="bar">Bar</choice>
        <choice value="line">Line</choice>
        <choice value="column">Column</choice>
        <choice value="table">Table</choice>
        <change>
          <condition value="table">
            <unset token="show_chart"></unset>
            <set token="show_table"></set>
          </condition>
          <condition>
            <set token="show_chart"></set>
            <unset token="show_table"></unset>
          </condition>
        </change>
      </input>
      <chart depends="$show_chart$">
        <search base="base_panel_data">
          <query></query>
        </search>
        <option name="charting.chart">$viz_type$</option>
        <option name="charting.drilldown">all</option>
      </chart>
      <table depends="$show_table$">
        <search base="base_panel_data">
          <query></query>
        </search>
      </table>
    </panel>
  </row>
</form>

Hi @Splunkerninja,

If your sample data is:

nCountry: United States\nPrevious Country

can you try updating your field extraction to use:

nCountry:\s(?<country>.+?)\\nPrevious\sCountry

Only two tweaks: there is no need to escape the : character, and because the text in the sample is a literal "\n", we need to escape the backslash with two backslashes. If you use three, that translates to a single backslash followed by a newline.

Your best bet for switching from a chart to a table is to show/hide pre-built panels using tokens. Tables have different options in the XML code - e.g. column formatting, coloring, drill-downs, and highlighting when the mouse hovers - none of which are relevant for a chart visualization. The main reason you can't use tokens to change from a chart to a table is that charts use a <chart> tag while tables use a <table> tag, and Simple XML doesn't support using tokens to set XML tag names in the dashboard code. The cleanest way, in my opinion, is to have hidden panels that you switch between using tokens.

As @yuanliu says, my queries give you a table - if you are indicating that the table does not come back in the order the panels are defined in the CSV, that's unfortunately a feature of Splunk. You can add this final line to order the columns as per the CSV:

| fields [
    | inputlookup panels.csv
    | eval Panels="\"".Panels."\""
    | stats list(Panels) as Panels
    | return $Panels ]

Unfortunately, adding KV stores does require a level of privilege - I believe you need admin_all_objects. You do have to create the collections and transforms conf files, and I suspect you will need to run this past your admin, as they will most likely have to create them for you. The Splunk App for Lookup File Editing does allow you to create KV store definitions, both collections and transforms, but you will need those privileges. There may also be environmental considerations around using the KV store that your admins would like to know about.

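For reference, a minimal sketch of the two conf files involved - the collection name and fields here are hypothetical placeholders, not taken from this thread:

collections.conf:

[host_list]
field.host = string
field.env = string

transforms.conf:

[host_list_lookup]
external_type = kvstore
collection = host_list
fields_list = _key, host, env

With those in place, | inputlookup host_list_lookup and | outputlookup host_list_lookup work like any other lookup.
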
Thanks. Is there a count that I can limit this to? It makes the call but never comes back with data, so I have to kill the process.

The question is what is actually in this event. After all, Splunk wouldn't escape a newline character: depending on the props for the data, it would either break the input stream into separate events or fold the line into a multiline event - it would not get rendered as \n. I think a similar thing goes for \x80 and \x9c - they are not (at least in ASCII-derivative encodings) control characters but extended characters. So unless I'm missing something, this is not Splunk escaping the data (as it sometimes does with control characters); these are actual raw character sequences received on input. I also didn't understand why it should break JSON parsing - properly escaped characters should be parsed out of the JSON data. My bet would have been that someone didn't rely on automatic JSON extraction but instead fiddled with regexes to parse fields out of this sourcetype. Edit: No, I forgot one thing. The JSON specification only allows extended characters to be specified in unicode form as \uXXXX. The \xXX notation is not allowed. That's why the JSON parsing fails.

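You can see the difference for yourself with a quick sketch like this (everything here is made up for illustration; \u0080 is legal JSON, \x80 is not):

| makeresults
| eval good="{\"msg\": \"\\u0080\"}", bad="{\"msg\": \"\\x80\"}"
``` msg_ok is extracted; msg_bad stays null because the second string is invalid JSON ```
| spath input=good path=msg output=msg_ok
| spath input=bad path=msg output=msg_bad
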
Ok. If you have two searches with significantly different performance (one takes much longer to complete than the other, or returns far more events) and you are dealing with subsearches, you want to put the "smaller" one into the subsearch. (Subsearches have their limitations, and with a smaller search it is less probable that the subsearch will get silently finalized, returning wrong/incomplete results.) As for replacing join, the typical way is to use stats. In a case similar to yours (we don't know your events, and the search you presented is incorrect), it would be something like:

<first search conditions>
| fields a b c d
| rename d as common_field
| append [
    <second search conditions>
    | fields e f g h
    | rename g as common_field ]
| stats values(*) as * by common_field

This is more or less the only way if you need to transform your data in both searches separately (for example, to run different stats aggregations in each). If you have only streaming commands, you can try to either use multisearch to overcome the limitations of the append command, or alternatively search with both sets of conditions and conditionally set/choose/transform the fields you need depending on which result set they come from. (The latter is probably the most effective way in some cases, but it is not always applicable and can be quite messy to write.)

Hello everyone. I need to create a metric or Health Rule which does the following:

Warning: 15% of calls with response time >= 50 secs
Critical: 30% of calls with response time >= 50 secs
Critical: 10% of calls with errors

Is this possible with AppDynamics? I'm trying with this formula:

({n_trx_rt}>=50000/{total_trx})*100

where n_trx_rt = Average Response Time and total_trx = Calls per Minute. This gives me a result, but I'm not sure the operation is supported by AppDynamics.

You could start by integrating Splunk with LDAP so that your dashboard searches the AD data using LDAP queries. This app should help: SA-ldapsearch - https://splunkbase.splunk.com/app/1151
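
Once the app is configured, a dashboard search could look something like this sketch (the domain name, filter, and attribute list are placeholders, not from this thread):

``` query AD for user objects and pull a few attributes ```
| ldapsearch domain=default search="(objectClass=user)" attrs="sAMAccountName,displayName,mail"
| table sAMAccountName displayName mail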