All Posts


Hi @Marcie.Sirbaugh, this is what I was able to find in the AppD docs: https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/install-app-server-agents/java-agent/install-the-java-agent/agent-installation-by-java-framework/mule-esb-startup-settings

You can also try contacting your AppD rep, or even Professional Services, for further investigation.
It seems you're trying to do xyseries - transform a series of values into an x-y chart. The problem is that you can only have one field on each axis, and you want two fields on one of them. But fear not, you can always use the trick of "combine and then split":

<your_search>
| eval orgbranch=Org.":".Branch
| xyseries orgbranch Role Name
| eval Org=mvindex(split(orgbranch,":"),0)
| eval Branch=mvindex(split(orgbranch,":"),1)
| fields - orgbranch
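The same "combine and then split" trick can be sketched in plain Python (a hypothetical illustration, not Splunk code; the sample rows and field names are made up to mirror the SPL above):

```python
# Illustrative sketch of the "combine then split" trick in plain Python.
# Rows of (Org, Branch, Role, Name) stand in for the search results.
rows = [
    ("Sales", "East", "President", "Alice"),
    ("Sales", "East", "VP", "Bob"),
    ("Sales", "West", "Manager", "Carol"),
]

pivot = {}
for org, branch, role, name in rows:
    key = org + ":" + branch                 # eval orgbranch=Org.":".Branch
    pivot.setdefault(key, {})[role] = name   # xyseries orgbranch Role Name

# Split the combined key back into the two original fields.
result = []
for key, roles in pivot.items():
    org, branch = key.split(":")
    result.append({"Org": org, "Branch": branch, **roles})
```

Each output row keeps one Org/Branch pair plus one column per Role, which is exactly what the xyseries-on-a-combined-key approach produces.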
Did you fix it?
I have read many articles about it, but none of them explain how to fix it. How can I fix it?

Error in 'IndexScopedSearch': The search failed. More than 1000000 events were found at time 1675957850.
You don't need ingest actions to mask your data. You can use either the SEDCMD functionality or a properly crafted transform. There are two things wrong with your transform:

1. Your regex does not match properly. Use https://regex101.com/ to test your regexes.

2. The REGEX part of the transform definition specifies a regex which must match for the event to be processed by the transform (and possibly captures parts of it), but DEST_KEY and FORMAT define the whole contents of the resulting field. So if you set DEST_KEY=_raw and the REGEX matches a part of the event, the _whole event_ will be overwritten with what you specify as FORMAT, not just the matched part. In other words, with

REGEX = .
FORMAT = aaaaa
DEST_KEY = _raw

the transform would match every non-empty event (since the regex matches any single character), but it would overwrite the entire event, not just one character, with the string "aaaaa".
How does this look?

[address_masking]
REGEX = (\\\"addressLine1\\\":\\\")([^\\\"]+)(\\\")
FORMAT = $1(masked)$3
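If it helps, you can sanity-check a regex like that outside Splunk. Here is a hypothetical Python sketch (Python's re engine, not Splunk's PCRE, so treat it as an approximation); it shows that a capture-group substitution rewrites only the matched span and leaves the rest of the event intact, which is the behavior the masking FORMAT above relies on:

```python
import re

# In a .conf file, \\\" matches the literal two characters \" in the raw event;
# the equivalent Python raw-string pattern is below.
pattern = re.compile(r'(\\"addressLine1\\":\\")([^\\"]+)(\\")')

# A made-up event fragment with the escaped-JSON address field in the middle.
raw = 'foo, \\"addressLine1\\":\\"1234 Main Street\\", bar'

# \1 and \3 keep the captured prefix and suffix; only group 2 is masked.
masked = pattern.sub(r'\1(masked)\3', raw)
```

Note that only the address value changes; the surrounding text survives, unlike a DEST_KEY=_raw transform whose FORMAT replaces the whole event.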
1. McAfee is a brand. There are several products which bear this name (like McAfee EDR, McAfee Web Gateway...); BTW, if I remember correctly, it's Trellix now. You're probably talking about one particular McAfee solution.

2. First, make sure that you're ingesting the events from this solution into your Splunk infrastructure.

3. Typically, if a "rule" is triggered, the event generated by that rule contains the rule identifier in its body. Most often it's enough to just search for this identifier.
Great! Thanks @PickleRick, I'll give that a shot!
If you already ingested some data into this index and you don't need or want it in your instance anymore, you can set a short retention period so that the data is quickly rolled to frozen and removed from the index. It's the easiest and most elegant way.
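For reference, a minimal indexes.conf sketch of the short-retention approach (the index name here is hypothetical, and where the stanza lives depends on your deployment; with no coldToFrozenDir or coldToFrozenScript configured, frozen buckets are simply deleted):

```
[old_index]
# Roll buckets to frozen after one hour; since no coldToFrozenDir or
# coldToFrozenScript is set, frozen data is deleted rather than archived.
frozenTimePeriodInSecs = 3600
```

Buckets are frozen based on the age of their newest event, so data may take a little longer than the configured period to actually disappear.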
Hi! I want to write a query that will show me all the events that were triggered by a certain rule I set in McAfee. How do I do this? Thank you!
Any chance this is available now?
OK. The answer to this question is not that straightforward. But often the original question might not be exactly what you need to get from your data. Anyway.

The first and, let's be honest, worst idea would be to create a real-time alert with a window of 10 minutes and trigger an alert when there are no results for your search. But this is a very bad idea! Using real-time searches (and alerts) is generally bad practice since they hog resources. So you should instead be checking the results returned by historical searches: look at your data "backwards" and verify whether there was an event when there should have been one.

Now, your wording is a bit confusing and ambiguous. What does "every 1 hour with 30 minutes interval" mean? Either you do something every hour, or you do it with a 30-minute interval (which means every half an hour). Depending on that, you should schedule your backward-looking search so that it finds the proper data if it's been ingested, runs a bit late relative to the event time so you can allow for some latency in the ingestion process (especially since you seem to be ingesting the events in batches), and takes care not to overlap results while, since you're searching for the time difference between events, also not "losing" any base events.

@gcusello tries to align your events into 10-minute buckets, but it won't work properly - timechart will fill missing 10-minute segments with values of 0, so the overall count will just be the count of 10-minute segments within the search time range, if there is at least one event during that time. The better approach here would be:

index=your_index
| timechart count span=10m
| where count=0

This way you'll get a list of 10-minute segments during which you didn't get a single event.
But it's not exactly what you asked for, because if you get one event at 1:11AM and another at 1:28AM, they are 17 minutes apart but counted in separate 10-minute segments, so both segments look "ok". The way to calculate the "lag" between events is to carry over the _time value from the previous event using the autoregress command (or streamstats; streamstats is more versatile but harder to use - in a simple case like this, autoregress is easier and more straightforward):

index=<your_index>
| autoregress _time as oldtime
| eval eventlag=oldtime-_time

This way you get a field called "eventlag" which tells you how far the event is from the previous/next one (I never remember in which direction the values are copied, so you need to test it and possibly reverse the calculation to (_time-oldtime) in order to not get negative values). You can then find the moments where eventlag was bigger than 600 (_time is expressed in seconds).
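The autoregress/streamstats idea boils down to differencing consecutive timestamps. Here is a hypothetical Python sketch of the same calculation, with made-up epoch-second timestamps standing in for _time and the 600-second threshold matching the 10-minute requirement:

```python
# Event times in epoch seconds, sorted ascending (Splunk events are
# time-ordered; autoregress copies the neighboring event's _time).
event_times = [1000, 1300, 1900, 2600, 2650]

# eventlag = _time - previous _time, i.e. the gap preceding each event.
gaps = [(t, t - prev) for prev, t in zip(event_times, event_times[1:])]

# Flag any gap strictly larger than 600 seconds (10 minutes).
alerts = [(t, lag) for t, lag in gaps if lag > 600]
```

With these sample times, only the event at 2600 arrives more than 600 seconds after its predecessor, so it is the only one flagged.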
We have two indexers, one on version 8.1.5 (which will not be updated soon) and one on version 9.1.0.1. I see 9 has a nice feature, "ingest actions", which is exactly what I need to mask some incoming Personal Information (PI). It comes in JSON and looks something like:

\"addressLine1\":\"1234 Main Street\",

I need to find some fields and remove their content. Yes, I believe there are backslashes in there. I tested a regex on 9 and added it to the transforms.conf and props.conf files on our 8.1.5 indexer, but the rules didn't work. In one of my tests the rule caused an entire log entry to change to "999999999" - not quite what I was expecting, but now we know Splunk was applying the rule. This is one of my rules that had no effect:

[address_masking]
REGEX = (?<=\"addressLine1\":\")[^\"]*
FORMAT = \"addressLine1\":\"100 Unknown Rd.\"
DEST_KEY = _raw

Found docs, looking at them now: Configure advanced extractions with field transforms - Splunk Documentation

Can someone point out what is wrong with the above transform? Thanks!
I appreciate the response @inventsekar, thank you. I have a main index that I don't need anymore and I'd like to remove it. If the index can't be deleted, can all of the data be removed from the index?
Creating additional indexed fields has its uses, but it also has its drawbacks, and very often a speedup of your searches can be achieved in various other ways (summary indexing, report acceleration, datamodel acceleration). So defining indexed fields just to make searches work faster may in many cases not be the best idea. Often it's enough to search your data properly to get a big increase in efficiency. Theoretically, you could define a calculated field using coalesce(), returning the indexed value if found (and Splunk might be able to optimize the search properly), but due to how Splunk search works, it might not in some cases give you a big search speed improvement.
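The calculated-field idea mentioned above could look something like this in props.conf. This is a hedged sketch with hypothetical sourcetype and field names; EVAL- defines a search-time calculated field:

```
[your:sourcetype]
# Prefer the indexed copy of the field when present, otherwise fall back
# to the search-time extraction (both field names are hypothetical).
EVAL-status = coalesce(status_indexed, status_extracted)
```

The search-time eval itself is cheap; whether the search as a whole gets faster depends on whether the optimizer can push the indexed value down into the lispy search.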
Splitting your lookup to include new fields "President", "VP", "Manager" would work, but it doesn't really scale if the role field has high cardinality. Here is another approach that is more scalable and can be generalized: you could make a net-new field in your lookup named role_json that contains the mapping info of role<-->name.

Edit: Just realized your request was to not have results in the mvexpanded format I first showed. So here is an updated method.

<base_search>
| lookup orgchart Org, Branch OUTPUT role_json
| foreach mode=multivalue role_json
    [ | eval tmp_json=if(isnull(tmp_json),
          json_object(spath('<<ITEM>>', "Role"), spath('<<ITEM>>', "Name")),
          json_set(tmp_json, spath('<<ITEM>>', "Role"), spath('<<ITEM>>', "Name"))) ]
``` capture any role_json values that are single values ```
| eval tmp_json=if(mvcount(role_json)==1,
      json_object(spath(role_json, "Role"), spath(role_json, "Name")),
      'tmp_json')
``` remove role_json (no longer needed) ```
| fields - role_json
``` parse out tmp_json to table all the proper mappings ```
| spath input=tmp_json
``` remove tmp_json (no longer needed) ```
| fields - tmp_json

Directly after the lookup, the results would look something like this (screenshot omitted), and then again after the foreach loop (screenshot omitted). I added a few extra entries to the lookup to demonstrate that this method is dynamic and doesn't need any hardcoded field names to account for potential new values.

Getting the new JSON object field into your existing lookup would look something like this (provided it's a CSV; if it is a KV store then you would probably need to update the collection definition to include the new field):

| inputlookup orgchart
| tojson str(Role) str(Name) output_field=role_json
| outputlookup orgchart
For example, I have a location field containing AB, AC, AD. I need to sum these three locations and create a new location named AM05, without replacing the existing AB, AC and AD. When searching for AM05, I want to see the summed values, and when searching for AB, it should display the existing value. @richgalloway
Apart from the MC or direct REST calls against your "main" components: if you have forwarder monitoring enabled in your MC, you'll see a list of the forwarders which have connected to your environment (the inventory is dynamically updated based on the version reported by the UF to _internal).
Hi @raysonjoberts, using a lookup like the one you described, please try something like this:

| inputlookup <your_lookup>
| stats values(eval(if(Role="President",Name,""))) AS President
        values(eval(if(Role="VP",Name,""))) AS VP
        values(eval(if(Role="Manager",Name,""))) AS Manager
    BY Org Branch

Ciao.
Giuseppe
That's due to how Splunk searches its indexes. Unless the field is indexed and properly configured, or you're searching with wildcards, Splunk will try to find the exact value you're searching for in its index files.

For example, I'm searching my home environment for:

index="winevents" EventCode=7040

A fairly simple search. When I look into the job inspector and get the job log, I see how Splunk executes the search against its indexed data:

01-12-2024 18:06:20.986 INFO UnifiedSearch [1919978 searchOrchestrator] - Expanded index search = (index="winevents" EventCode=7040)
01-12-2024 18:06:20.986 INFO UnifiedSearch [1919978 searchOrchestrator] - base lispy: [ AND 7040 index::winevents ]

As you can see, Splunk didn't optimize the search (because there wasn't much to optimize - the search was very simple), but the resulting lispy search looks literally for the value "7040", with a metadata field of index equal to winevents (actually, index is treated a bit differently than other indexed fields, but that doesn't matter here). Only after finding the events that contain "7040" anywhere within their body will Splunk try to parse out the EventCode field from them and match that value (possibly numerically) against your argument.
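This two-phase behavior can be illustrated with a toy sketch (hypothetical Python, nothing like Splunk's actual TSIDX format): phase one keeps every event whose token set contains the literal "7040" anywhere, and only phase two extracts the field and compares its value:

```python
import re

# Made-up events; the second one mentions 7040 only as a port number.
events = [
    "EventCode=7040 The start type of the service was changed",
    "EventCode=4624 An account was successfully logged on port 7040",
    "EventCode=7036 The service entered the running state",
]

# Phase 1: lispy-style token match - any event containing the token "7040".
def tokens(event):
    return set(event.replace("=", " ").split())

candidates = [e for e in events if "7040" in tokens(e)]
# Both the first and second events survive phase 1.

# Phase 2: parse out EventCode and compare the actual field value.
matches = [e for e in candidates
           if (m := re.search(r"EventCode=(\d+)", e)) and m.group(1) == "7040"]
```

The port-number event is fetched from "disk" in phase 1 but discarded in phase 2, which is why a bare indexed-token search can read more events than the final result contains.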