@richgalloway I am trying to extract them using regex. I select the event, choose Action, then Extract Fields, and select extraction by regular expression.
You mean - for example - having //1.2.3.4/idx1 and //1.2.3.4/idx2 mounted to /srv/splunk_cold on idx1 and idx2 respectively? Yes, that will work. Of course, the performance of searching over NFS will not be stellar and you might regret not using local storage, but from a technical point of view it will work.
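In fstab terms, that layout might be sketched like this (the IP and export names are taken from the example above; note that NFS exports are normally written host:/path rather than //host/path - this is only a sketch, not a tested configuration):

```
# /etc/fstab on idx1
1.2.3.4:/idx1  /srv/splunk_cold  nfs  defaults,_netdev  0 0

# /etc/fstab on idx2
1.2.3.4:/idx2  /srv/splunk_cold  nfs  defaults,_netdev  0 0
```

Each indexer then sees its own private export under the same local path, which is what the cold-path configuration in indexes.conf requires.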
Your question is not clear. If you want to make your ingested data CIM-compliant you should do as @marnall says - create tags, make sure your fields are either CIM-conformant or create calculated fields and aliases to make them CIM-conformant. But as you're speaking about dashboards - if you want to use datamodels, just do that - search or do tstats over datamodels, not raw data. And use those searches to power your dashboard panels.
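As a rough illustration of the aliases/calculated-fields approach described above (the sourcetype and field names below are hypothetical - adapt them to your own data), a props.conf stanza might look like:

```
# props.conf -- hypothetical sourcetype and field names; adapt to your data
[my:custom:sourcetype]
# Alias an existing field to its CIM-compliant name
FIELDALIAS-cim_src = source_address AS src
# Calculated field translating a vendor status into CIM "action" values
EVAL-action = if(vendor_status=="OK", "success", "failure")
```

Tags are then applied via eventtypes.conf and tags.conf so that the events are picked up by the relevant data model.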
OK, I understand. What about having two different NFS volumes on a SAN, one volume for each indexer, where the mount point on the OS has the same name on both indexers? Can this solution work?
1. Haven't we discussed this on Slack yesterday? (Or was I discussing that with another person? The sourcetype was the same and the case was similar.)
2. Your LINE_BREAKER should get rid of the "event": part already (it's within the capture group, so it should be treated as part of the line breaker and stripped). So apparently your settings are not applied at all. I'd say you probably have your props set on the wrong component.
@yuanliu Yes, I had to convert them to XML, so that I could extract the fields I needed. The logs are in French, and I was having issues parsing them
I have two searches: one will produce Icinga problem alerts and the other will produce Icinga recovery alerts. I want to compare the host and State fields: if the Icinga alert has been recovered within a 15-minute window, no action should be taken; otherwise, execute a script. First search, below is the snippet.   Second query, below is the snippet.
eventtype=msad-rep-errors (host="*")
| lookup EventCodes EventCode,LogName OUTPUTNEW desc
| eval desc=if(isnull(desc),"Unknown EventCode",desc)
| stats count by host,Type,EventCode,LogName,desc
| lookup DCs_tier0.csv host OUTPUTNEW domain offset_value
| search offset_value=1
| search (host="*") (domain="*")
| table host domain Type EventCode LogName desc
No. Regardless of whether it's Splunk or any other solution that assumes it has full control over its data (in this case, the contents of the colddb directory), configuring multiple instances of "something" over the same set of data is a pretty sure way to disaster. BTW, SmartStore works differently than normal storage tiering. Since it's object storage and you can't just access files randomly, it uses a cache manager to bring whole buckets into cache when they're needed. It is good for some use cases, but with others (frequent searching across many historical buckets that in total don't fit on warm storage) it can cause performance headaches.
Wait a second. You're trying to say that regardless of what timezone you set in your preferences the event is still shown at the same time for the same event? (The time on the left, not the time within the event itself obviously since this one is already ingested, indexed and it won't change). That should be impossible. BTW, what does your ingestion architecture look like for this source? File->UF->indexer? Where do you have your props.conf settings (on which component)?
You can do inline extraction with rex, e.g. | rex "lda\((?<to>[^\)]*)\)" which will extract a new field called to from the portion between the parentheses. You can also set this up as a field extraction - see Fields -> Field Extractions and create a new field extraction there using the regex above. Then, if lda(xxx) exists in your data, you will get a field called to.
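A quick way to sanity-check that regex outside Splunk is Python's re module - note that Python spells the named group (?P&lt;to&gt;...) rather than (?&lt;to&gt;...), and the sample log line below is made up:

```python
import re

# Same pattern as the rex above, with Python's named-group syntax
pattern = re.compile(r"lda\((?P<to>[^\)]*)\)")

m = pattern.search("2024-01-01 INFO lda(user@example.com) message accepted")
print(m.group("to"))  # -> user@example.com
```

If the pattern matches your events here, it should behave the same way in rex, since Splunk uses PCRE-compatible syntax.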
I am also facing the same issue. Have you found a solution for this?
Hi @atr ,
Check also the other hardware requirements, to avoid future issues.
Let us know if we can help you more, or, please, accept one answer for the other people of the Community.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors
Hi marnall, thanks for the answer. So there is no way to put cold data on NFS network storage without implementing SmartStore?
Hi @marka3721 ,
You are right! Sorry, I confused F5 with Fortinet!
Anyway, take the transformations you find in the add-on's transforms.conf and try them out. The transformations to look for and verify in transforms.conf are: f5_bigip-icontrol-locallb, f5_bigip-icontrol-globallb, f5_bigip-icontrol-networking, f5_bigip-icontrol-management, f5_bigip-icontrol-system-systeminfo, f5_bigip-icontrol-system-statistics, f5_bigip-icontrol-system-disk, f5_bigip-icontrol-management-device, f5_bigip-icontrol-networking-interfaces, f5_bigip-icontrol-networking-adminip, f5_bigip-icontrol-locallb-pool, f5_bigip-icontrol-management-usermanagement.
Check whether those regexes match your data or you need to modify them to adapt them to your logs. If you have to modify them, remember to copy the transforms.conf file into the local folder before modifying it.
Ciao.
Giuseppe
Your code looks fine
Hi @DoubleAka ,
Your message seems to be in JSON, so if you delete part of the message (for example the first part) you lose the formatting and you can no longer use field extraction tools such as INDEXED_EXTRACTIONS or spath; furthermore, you save very little by deleting just one word.
In any case, the SEDCMD setting uses a substitution regex, and the one you used is wrong because quotes must be escaped and you missed the global parameter:
SEDCMD-strip_event = s/^\"event\":\{\s*//g
Ciao.
Giuseppe
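For what it's worth, the effect of that SEDCMD can be previewed with an equivalent Python re.sub (the sample event below is invented). Note how the result is no longer balanced JSON, which illustrates the point about losing the formatting:

```python
import re

raw = '"event":{ "name": "login", "user": "alice"}'

# Python equivalent of SEDCMD-strip_event = s/^\"event\":\{\s*//g
cleaned = re.sub(r'^"event":\{\s*', '', raw)
print(cleaned)  # -> "name": "login", "user": "alice"}
```

The closing brace at the end now has no opening counterpart, so spath or INDEXED_EXTRACTIONS would no longer parse the event cleanly.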
Your requirement isn't really clear. Not to point to the obvious difference between last (set in the first panel) and $latest$ (used in the second panel), but are you sure you can even add an additional field in the first panel and still maintain your original timechart? (Hint: it will ruin it all; at least it will distort the chart.) Another important question: what is that $latest$ supposed to be? It seems that you want it to be an interactive token, because you set it according to _time, which varies by row. I already mentioned that setting a new field after timechart will ruin your chart. But in addition, Dashboard Studio has its own rules for managing tokens. You cannot set a variable in one search, call that variable with $$, and expect it to be a passable token. This is the document about setting interactive tokens from search results: Setting tokens from search results or search job metadata. Then, to add 1 week to the click value, run that result in another search (just like you would do in Simple XML). Lastly, use the result from that search to drive the second panel.
Here is an example:

{
  "visualizations": {
    "viz_7yE1ZwsT": {
      "type": "splunk.line",
      "dataSources": { "primary": "ds_DmIKSSCN" },
      "title": "First panel",
      "eventHandlers": [
        {
          "type": "drilldown.setToken",
          "options": {
            "tokens": [
              { "token": "latest_tok", "key": "row._time.value" }
            ]
          }
        }
      ],
      "options": { "legendDisplay": "top" }
    },
    "viz_OIqDnl0b": {
      "type": "splunk.line",
      "options": { "legendDisplay": "bottom" },
      "dataSources": { "primary": "ds_79fdaiuf" },
      "showProgressBar": false,
      "showLastUpdated": false
    }
  },
  "dataSources": {
    "ds_DmIKSSCN": {
      "type": "ds.search",
      "options": {
        "query": "| tstats count where index=_internal by _time span=1d sourcetype\n| timechart span=1d sum(count) by sourcetype\n| eval _last = relative_time(_time, \"+1w\")"
      },
      "name": "first panel"
    },
    "ds_79fdaiuf": {
      "type": "ds.search",
      "options": {
        "query": "index=_introspection latest=$make token:result.week_after$\n| timechart span=1d count by sourcetype"
      },
      "name": "dependent panel"
    },
    "ds_EHm1QhZI": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults\n| eval week_after = relative_time($latest_tok$, \"+1w\")",
        "enableSmartSources": true
      },
      "name": "make token"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": { "token": "global_time", "defaultValue": "-3w@w,now" },
      "title": "Global Time Range"
    }
  },
  "layout": {
    "type": "grid",
    "options": { "width": 1440, "height": 960 },
    "structure": [
      { "item": "viz_7yE1ZwsT", "type": "block", "position": { "x": 0, "y": 0, "w": 1440, "h": 400 } },
      { "item": "viz_OIqDnl0b", "type": "block", "position": { "x": 0, "y": 400, "w": 1440, "h": 400 } }
    ],
    "globalInputs": [ "input_global_trp" ]
  },
  "description": "https://community.splunk.com/t5/Splunk-Search/Dashboard-Studio-earliest-latest-tokens/m-p/691740",
  "title": "Pass time token"
}

In this dashboard, when you click a point on July 13 in the first panel, the second panel will end on July 20. Is this something you are looking at?
As mentioned, we dropped one of the "s/" prefixes and also the "g" at the end:
SEDCMD-CLean_powershell_800 = s/\n\s+Context Information\:.*([\r\n]+.*){0,500}//
SEDCMD-CLean_powershell_4103 = s/\s+Context\:.*([\r\n]+.*){0,500}//
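If it helps, the stripping behaviour of the 4103 rule can be simulated with Python's re.sub (the PowerShell event text below is a fabricated stand-in, not a real event):

```python
import re

raw = ("04/10/2024 Event 4103\n"
       "    Context:\n"
       "        Severity = Informational\n"
       "        Host Name = ConsoleHost\n")

# Same pattern as SEDCMD-CLean_powershell_4103: everything from
# "Context:" onward, up to 500 following lines, is removed
cleaned = re.sub(r"\s+Context\:.*([\r\n]+.*){0,500}", "", raw)
print(cleaned)  # -> 04/10/2024 Event 4103
```

If the Python result matches what you expect, but the indexed events still contain the Context block, the props are likely being applied on the wrong component rather than the regex being at fault.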
@PickleRick When I am using this time preference, there is no difference showing. So is it good to set up this setting? Is there anything else you want to suggest for a fix?