All Posts


Apologies, this was difficult to try to explain via text. I have an MV field and am iterating through it, using a regex to create multiple capture groups, then creating a new field from some of those capture groups. That new field is colon separated. I noticed that within my 3rd capture group, the values in the MV field can sometimes have non-alphanumeric characters, which causes the regex to not match (due to the group being [\w\s]). So... modify the regex to capture everything! But... what about when the special character is a colon ( : )? In that scenario it adds an additional colon to my new colon-separated field, which makes that entry invalid due to nonconformity to the pattern. I thought, why not just get rid of every non-alphanumeric character that will be in the 3rd capture group before I create the new field, so there aren't issues. Which brought me here, as I cannot seem to find a way to do that. Instead, I am now thinking it may be better to simply capture everything and then clean up the new field instead, as that will not be an MV field. Maybe I can use regex and sed to eliminate any special characters in the new field; I just need to figure out how to account for the case when that character is a colon. Since it's the 3rd capture group, I would need the pattern to have 4 colons before that part of the field and 7 colons after it. cpe:2.3:a:\2:\3:\5:*:*:*:*:*:*:* - \1 - \4
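If it helps, here is one hedged sketch of that post-hoc cleanup in SPL. It assumes the finished field is called cpe_entry (a placeholder name), that it should contain exactly 13 colon-separated segments per the cpe:2.3 pattern above with the 3rd capture group landing in segment 5, and that any extra colons can only have come from that group; adjust the index math if your finished field also carries the trailing \1/\4 parts.

| eval parts=split(cpe_entry, ":"), n=mvcount(parts)
| eval middle=mvjoin(mvindex(parts, 4, n-9), "")
| eval middle=replace(middle, "[^\w\s]", "")
| eval cpe_entry=mvjoin(mvindex(parts, 0, 3), ":") . ":" . middle . ":" . mvjoin(mvindex(parts, n-8, n-1), ":")

The mvindex/mvjoin pair glues back together any segment-5 pieces that a stray colon split apart, and the replace then strips the remaining specials before the field is reassembled.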
Hi all, how can we resolve the issue of the Cycognito correlation search not triggering any alerts in Splunk over the past month?

index=cycog sourcetype="cycognito:issue" severity="Critical"
| stats count, values(affected_asset) as affected_asset, values(title) as title, values(summary) as description, values(severity) as severity, values(confidence) as confidence, values(detection_complexity) as detection_complexity, values("evidence.evidence") as evidence, values(exploitation_method) as exploitation_method, earliest(first_detected) as first_detected, latest(last_detected) as last_detected, values(organizations) as organization by cycognito_id
| eval date_found=strptime(first_detected,"%Y-%m-%dT%H:%M:%S.%QZ")
| eval control_time=relative_time(now(), "-24h")
| where date_found > control_time

Thanks in advance.
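One hedged first check, reusing the field names from the search above: drop the where clause and verify first_detected still parses, since strptime returns null on a format mismatch and the where clause would then silently discard every event.

index=cycog sourcetype="cycognito:issue" severity="Critical"
| eval date_found=strptime(first_detected, "%Y-%m-%dT%H:%M:%S.%QZ")
| eval parse_status=if(isnull(date_found), "parse failed", "parsed ok")
| stats count by parse_status

If everything lands in "parse failed", the timestamp format of first_detected has likely changed and the format string needs updating.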
Hi @yogeshgs, my Splunk Cloud instance does not have the Data Manager app, and as per my understanding it ships with the instance and can't be installed separately. Can you guide me on what to do in this case if I need Data Manager for my instance? Any response will be appreciated; thanks in advance.
Hi @burwell, yes, this did fix my issue. I adjusted the default 2p to represent 5 days' worth of time in seconds. Now when I check the job manager when the alert is triggered, I see the expire time is 5 days away. Thanks
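For reference, if that change lives in savedsearches.conf, the stanza might look like the sketch below (the stanza name is a placeholder; dispatch.ttl is the setting whose default is 2p, and 5 days is 432000 seconds):

[My Alert Name]
dispatch.ttl = 432000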
My data model is searching for all Windows logins.

index=* EventCode=4624 OR (EventCode=4625 OR ((EventCode=4768 OR EventCode=4771 OR EventCode=4776) status="failure")) NOT (user=*$) NOT (user=system) NOT (user=*-*)

With this search I get a field called dest_nt_domain. This field will have results such as: Test, Test.local, other. My above search has the rex command to remove everything after the period. Finally, I have a KV store lookup called Domain with a field of name; it contains one value, Test. I want to evaluate the above data against the one value in my lookup.
Per host there are 2 events logged: one indicating a transition to active and one indicating a transition to inactive. I can't figure out a query that can accurately do this per host given the following stipulations. Given the first event within the query time range, it can be assumed the host was in the opposite state prior. Only calculate transitions between the 2 states; if there are multiple identical events between transitions, calculate from the time of the first occurrence. Include the latest condition up until the time the search is run.
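As a hedged starting point for the dedupe-and-measure part, assuming a field named state with values active/inactive (both names are assumptions):

| sort 0 host _time
| streamstats current=f last(state) as prev_state by host
| where isnull(prev_state) OR state!=prev_state
| streamstats current=f last(_time) as prev_change by host
| eval secs_in_prev_state=_time - prev_change

The first streamstats/where pair keeps only the first event of each run of identical states; the second measures the time between the surviving transitions. Covering the period up to search time could then be handled by appending a synthetic per-host event at now(), e.g. via appendpipe.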
Ok. Honestly, you lost me here. What does filtering characters have to do with extracting fields? Either you filter characters and parse the resulting event, or parse out fields and then filter each field on its own. Or am I missing something?
OK, and how is your question connected to Splunk?
Because I need to create the new field for every entry.  If I were to implement that, then it would simply not match that entry and move on, effectively ignoring it.  So I'm trying to find a way to clean this portion of the data such that I can capture every entry.  Right now I'm only matching on entries that contain [\w\s] but I'm missing a bunch.  True, [\X] would ignore less, but I'm trying to ignore none and capture all, but avoid problems by cleaning/manipulating the text before capturing it within the capture group.
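For illustration only, the capture-all-then-clean idea might look like this (the rex pattern and field names are placeholders, not your actual ones):

| rex field=mv_field "(?<g1>[^-]+)-(?<g2>[^-]+)-(?<g3>.+)"
| eval g3=replace(g3, "[^\w\s]", "")
| eval new_field=g1 . ":" . g2 . ":" . g3

If g3 comes back multivalue, the replace can be wrapped as mvmap(g3, replace(g3, "[^\w\s]", "")).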
I have a feeling that you're thinking in SQL and want to bring the same paradigm to Splunk. Try describing what data you have and what you want to get as a result. We'll see how to get there.
Hi @tushar.darekar, I want to make sure I understand your question. Are you asking for the maximum number of Pages, iframes, Virtual Pages, and Ajax calls you can exclude?
Ok, why not do s/[^\X]/_/g or something similar?
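In SPL that suggestion might look like the following, with myfield as a placeholder:

| rex mode=sed field=myfield "s/[^a-zA-Z0-9 ]/_/g"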
My current search that is working is:

| from datamodel:Remote_Access_Authentication
| rex field=dest_nt_domain "^(?<dest_nt_domain>[^\.]+)"
| join dest_nt_domain [| inputlookup Domain | rename name AS dest_nt_domain | fields dest_nt_domain]
| table dest_nt_domain

My problem is that this search only returns values that match. How can I change this to an evaluation? If the two items match, "Domain Account"; if not, "Non Domain Account". My input lookup only contains one item.
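One hedged way to turn the match into an evaluation instead of a filter is to swap the join for lookup with OUTPUT and then eval the label (this assumes the lookup field really is called name):

| from datamodel:Remote_Access_Authentication
| rex field=dest_nt_domain "^(?<dest_nt_domain>[^\.]+)"
| lookup Domain name AS dest_nt_domain OUTPUT name AS lookup_match
| eval account_type=if(isnotnull(lookup_match), "Domain Account", "Non Domain Account")
| table dest_nt_domain account_type

Unlike join, the lookup leaves non-matching rows in place, so the eval can label both cases.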
Not quite - your fieldformat is using strftime rather than tostring
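For anyone landing here later, the distinction in a sketch (assuming the field holds an elapsed time in seconds; the field name is a placeholder):

| fieldformat time_to_close=tostring(time_to_close, "duration")

tostring(X, "duration") renders elapsed seconds as HH:MM:SS, whereas strftime expects an epoch timestamp rather than an interval.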
Generally speaking, the 403 error is on the server side. You say you are trying to download a trial, and that you "managed to register it" - I don't know what the latter means. If it's a trial, you download it, install it, and it just works. They're *all* trials until you put in a real license key (or connect it to your license master).[1] You could try:
a) Just logging into a different place in Splunk, like docs.splunk.com. I think once you are logged in there you can then go back to splunk.com and see if it all still works.
b) Have you tried just creating a new account and seeing if you can download it then? I'm not aware of any formal process for this; you just "create new account", fill in a few pieces of information, and off to the downloads you go.
Anyway, hope one of these gets you working again!
[1] LOL, this has actually been a complaint of mine for a decade now. I don't want to download a trial and convert it, I want to feel like I'm downloading the actual paid-for version that $company owned. But it's just words, so 'whatever'.
Hi, we have a data model built against application data. All the tstats searches against the DM were running fine, including the ones using summariesonly=true. I was noticing some discrepancy between the data model and raw data when plotting a timechart for the exact same time range. I checked the data model and found the _time field was not added. But after adding that and re-accelerating the data model, now I can't use summariesonly=true: no results are returned. I do get data back without summariesonly=true. What could have gone wrong here?

UPDATE: I am able to search using summariesonly=true (maybe the DM needed more time to regenerate), but now I see a massive difference in counts between summariesonly=true vs. false. Data with false closely matches the raw data stats. Before that _time change, even summariesonly=true was matching the counts precisely. I see the _time field is set to "required" in the model, but I don't think that would be preventing certain events from going into the summary. All events in raw data do have the default _time field. Am I missing some key fact here on how summary calculation might have changed with the addition of this _time field?
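One hedged way to quantify the gap is to line up bucketed counts from the summary against the full search over the same window (the data model name is a placeholder):

| tstats summariesonly=true count as summary_count from datamodel=My_DM where earliest=-24h@h latest=@h by _time span=1h
| appendcols [| tstats summariesonly=false count as full_count from datamodel=My_DM where earliest=-24h@h latest=@h by _time span=1h]
| eval missing=full_count - summary_count

It is also worth confirming the rebuilt acceleration has reached 100% completion under Settings > Data models before trusting the summary counts.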
Well, tstats works with more than just time fields. The limitation is that it only works with fields that are created *at* the time the events are indexed. https://docs.splunk.com/Documentation/Splunk/9.1.3/Indexer/Indextimeversussearchtime I honestly think some of the information there about performance is outdated, but most of it is still fine documentation. Index-time fields are those created when the data is indexed. By default that's just the built-in fields, like _time, sourcetype, and so on. In some cases it's all the fields, for instance with INDEXED_EXTRACTIONS=<json/csv/whatever>. But otherwise Splunk generally relies on search-time fields - fields that are built "on the fly" as you run your search. That's more flexible and doesn't go 'out of date' as events change or the fields you need change. The docs above should have links off them to explain more, but that's the gist of it.
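As a tiny illustration, those default indexed fields are why a bare tstats like this works without any data model at all:

| tstats count where index=* by sourcetype, host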
I tried the same concept for a different query and it did not run. This one calculates how much time the alert took to be closed in the incident manager.
Hi @LinghGroove, good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated
ok thanks a lot @gcusello, have a good day