All Posts


There are more than 1 million events indexed with the same timestamp - February 9, 2023 15:50:50 UTC. Double-check the inputs.conf and props.conf settings to ensure events are being onboarded correctly. Searching this data will be a challenge, if it can be done at all.  Add index, source, sourcetype, and host fields to the base query to narrow the scope of the search as much as possible.
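For example, a narrowed base search might look something like the following; all of the values here are placeholders for your own environment:

index=your_index sourcetype=your_sourcetype source=your_source host=your_host
| <rest of your search>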
Restating the requirements does not explain them.

@Muthu_Vinith wrote: For example, I have location field containing AB, AC, AD. I need to sum these three locations and create a new location named AM05, without replacing the existing AB, AC and AD.

You have that.  See the following example query:

| makeresults format=csv data="location,cap,login
AA01,10,5
AB02,6,0
AC03,10,0"
| appendpipe [stats sum(cap) as cap, sum(login) as login | eval location="AM05"]
| table location cap login

@Muthu_Vinith wrote: When searching for AM05, I want to see the added values, and when searching for AB, it should display the existing value !!

The AM05 location doesn't exist until this search runs.  Therefore, you can't search for AM05.
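For reference, the example above should produce a table along these lines, with the AM05 row holding the summed cap and login values (10+6+10=26 and 5+0+0=5):

location  cap  login
AA01      10   5
AB02      6    0
AC03      10   0
AM05      26   5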
@PickleRick, what approach should be taken to export data that lands in Splunk Cloud through modular inputs (and not from any UF/HF)?
Can you use a drilldown to set a token and use that token in a URL? If I understand the OP correctly, yes. Here is what I did today; I found this post while trying to solve this problem.

I have a Studio dashboard that lists my installed apps in a table. An app may have a hyperlink, and I want to navigate to the link from the dashboard. (I simplified this search from what is shown in the image, eliminating inputs, etc.)

| rest splunk_server=local /services/apps/local
| rename attribution_link as URL
| table title label description author version build URL
| eval URL=replace(URL,"^https?://","")

I can now click the URL column and navigate to the URL in a new tab. The URL is set by a token, then another drilldown opens the link. I didn't see a way in the GUI to add two token types, so I used copy/paste to add a second drilldown code segment: the first segment is drilldown.setToken and the second is drilldown.customUrl.

lines 22 - 41:

"eventHandlers": [
    {
        "type": "drilldown.setToken",
        "options": {
            "tokens": [
                {
                    "token": "url",
                    "key": "row.URL.value"
                }
            ]
        }
    },
    {
        "type": "drilldown.customUrl",
        "options": {
            "url": "https://$url$",
            "newTab": true
        }
    }
],

At first it didn't work, but then I found the drilldown option was set to "false." After setting it to "true" it worked as intended. Hurray!

line 5:

"drilldown": "true",

I hope that helps. Happy Splunking!
Hi Team, we have a SH cluster as below: 3 SH members, 1 Deployer. A few alerts keep firing with the splunk@<> sender even though we configured our customised sender, and a few alerts that were disabled are still firing from one of the SH cluster members, but not from the other two (which is expected). We have raised multiple vendor cases but got no help. Can someone help?
This is a fundamental problem with data that has been badly ingested into Splunk. Splunk returns results in reverse chronological order, so it needs to be able to sort the results properly based on the _original_ value of the _time field (afterwards, the _time field can be rewritten during the search pipeline and it won't affect the result order). If you have several hundred thousand events indexed at the same point in time, Splunk cannot sort them due to memory constraints. It's not a problem with the search as such; it's a problem with the data - fix your data onboarding.
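As a rough sketch of what fixing the onboarding usually involves here, timestamp recognition is configured in props.conf for the affected sourcetype; the stanza name and formats below are placeholders to adapt to your actual data:

[your:sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

If Splunk cannot extract a timestamp at all, it falls back to other sources (previous event, file modification time, index time), which is how large numbers of events can end up sharing one _time value.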
Hi @Marcie.Sirbaugh, This is what I was able to find in AppD Docs: https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/install-app-server-agents/java-agent/install-the-java-agent/agent-installation-by-java-framework/mule-esb-startup-settings You can also try contacting your AppD Rep, or even Professional Services, for further investigation.
It seems you're trying to do xyseries - transform a series of values into an x-y chart. The problem with this is that you can only have one field on each axis and you want two fields on one of them. But fear not, you can always do the trick of "combine and then split".

<your_search>
| eval orgbranch=Org.":".Branch
| xyseries orgbranch Role Name
| eval Org=mvindex(split(orgbranch,":"),0)
| eval Branch=mvindex(split(orgbranch,":"),1)
| fields - orgbranch
Did you fix it?
I read many articles about it but no one knows how to fix it. So how can I fix it? Error in 'IndexScopedSearch': The search failed. More than 1000000 events were found at time 1675957850.
You don't need ingest actions to mask your data. You can either use SEDCMD functionality or a properly crafted TRANSFORM. There are two things wrong with your TRANSFORM:

1. Your regex does not match properly. Use https://regex101.com/ to test your regexes.

2. The REGEX part of the TRANSFORM definition specifies a regex which must match for the event to be processed by the transform (and possibly captures parts of it), but DEST_KEY and FORMAT define the whole contents of the resulting field. So if you set DEST_KEY=_raw and the REGEX matches a part of the event, the _whole event_ will be overwritten with what you specify as FORMAT, not just the matched part. In other words, if you did REGEX=. FORMAT=aaaaa DEST_KEY=_raw, your transform would match every non-empty event, since the REGEX matches any character, but it would overwrite the whole event, not just one character, with the string "aaaaa".
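As a rough sketch of the SEDCMD alternative for the addressLine1 example discussed in this thread - the sourcetype name is a placeholder and the escaping should be verified against a sample event, since the raw data contains escaped quotes:

[your:sourcetype]
SEDCMD-mask_address = s/(\\"addressLine1\\":\\")[^\\"]+/\1(masked)/g

SEDCMD, like index-time transforms, is applied at parsing time, so it needs to be deployed on the component that first parses the data (heavy forwarder or indexer) and it only affects newly ingested events.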
How does this look?

[address_masking]
REGEX = (\\\"addressLine1\\\":\\\")([^\\\"]+)(\\\")
FORMAT = $1(masked)$3
1. McAfee is a brand. There are several products which bear this name (like McAfee EDR, McAfee Web Gateway...); BTW, if I remember correctly, it's Trellix now. You're probably talking about one particular McAfee solution.

2. First, make sure that you're ingesting the events from this solution into your Splunk infrastructure.

3. Typically, if there is a "rule" which is triggered, the event generated by such a rule contains the rule identifier in its body. Most often it's enough to simply search for this identifier.
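For example, assuming the events land in an index called mcafee and the rule identifier looks like RULE-1234 (both are placeholders for your own environment), the search could be as simple as:

index=mcafee "RULE-1234"

If the add-on you use extracts the rule identifier into a field, filtering on that field instead of a raw-text match is cleaner - check your own data for the field name.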
Great! Thanks @PickleRick , I'll give that a shot!
If you already ingested some data into this index and you don't need/want it in your instance anymore you can set a short retention period so that data is quickly rolled out and removed from the index. It's the easiest and most elegant way.
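As a sketch, retention is controlled per index by frozenTimePeriodInSecs in indexes.conf; the index name and value below are placeholders, and with no coldToFrozenDir configured, frozen buckets are simply deleted:

[your_index]
frozenTimePeriodInSecs = 3600

Note that buckets are frozen based on the age of their newest event and only after they have rolled out of hot, so the data may not disappear immediately.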
Hi! I want to write a query that will show me all the events that were triggered by a certain rule that I set in McAfee. How do I do this? Thank you.
Any chance this is available now?
OK. The answer to this question is not that straightforward. But often the original question might not be exactly what you need to get from your data. Anyway.

The first and, let's be honest, worst idea would be to create a real-time alert with a window of 10 minutes and trigger an alert when there are no results for your search. But this is a very bad idea! Using real-time searches (and alerts) is generally a bad practice since they hog resources. So you should be checking the results returned by historical searches. You should look at your data "backwards" and verify whether there was an event when there should have been one.

Now, your wording is a bit confusing and ambiguous. Firstly, what does "every 1 hour with 30 minutes interval" mean? Either you do something every 1 hour or with a 30 minute interval (which means every half an hour). Depending on that, you should schedule your backward-looking search so that it finds the proper data if it's been ingested, runs a bit later than the event time so you can allow for some degree of latency in the ingestion process (especially since you seem to be ingesting the events in batches), and takes care not to overlap results while, on the other hand, not "losing" any base events, since you're searching for the time difference between events.

@gcusello tries to align your events into 10 minute buckets (but it won't work properly - the timechart will fill missing 10-minute segments with values of 0, so the overall count will be just a count of the 10-minute segments within the search timerange, _if_ there is at least one event during that time). The better approach here would be:

index=your_index
| timechart count span=10m
| where count=0

This way you'll get a list of 10 minute segments during which you didn't get even a single event. But it's not exactly what you asked for, because if you get one event at 1:11AM and another at 1:28AM, they are 17 minutes apart but they are counted in separate 10-minute segments, so both those segments are "ok".

The way to calculate "lag" between events would be to carry over the _time value from the previous event using the autoregress command (or streamstats; streamstats is more versatile but harder to use; in a simple case, autoregress is easier and more straightforward):

index=<your_index>
| autoregress _time as oldtime
| eval eventlag=oldtime-_time

This way you get a field called "eventlag" which tells you how much time the event was before/after the previous/next one (I never remember which direction the values are copied, so you need to test it and possibly reverse the eventlag calculation to (_time-oldtime) in order to not get negative values). This way you can find in which moments your eventlag was bigger than 600 (the _time is expressed in seconds).
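Putting the pieces together, a minimal sketch of the full check described above (the index name is a placeholder, and you may need to flip the subtraction as noted):

index=<your_index>
| autoregress _time as oldtime
| eval eventlag=oldtime-_time
| where eventlag > 600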
We have two indexers, one on version 8.1.5 (which will not be updated soon) and one on version 9.1.0.1. I see version 9 has a nice feature, "Ingest actions", which is exactly what I need to mask some incoming Personal Information (PI). It is coming in JSON and looks something like:

\"addressLine1\":\"1234 Main Street\",

I need to find some fields and remove the content. Yes, I believe there are backslashes in there. I tested a regex on 9 and added it to the transforms.conf and props.conf files on our 8.1.5 indexer, but the rules didn't work. In one of my tests the rule caused an entire log entry to change to "999999999" - not quite what I was expecting, but now we know Splunk was applying the rule. This is one of my rules that had no effect:

[address_masking]
REGEX = (?<=\"addressLine1\":\")[^\"]*
FORMAT = \"addressLine1\":\"100 Unknown Rd.\"
DEST_KEY = _raw

Found docs, looking at them now: Configure advanced extractions with field transforms - Splunk Documentation

Can someone point out what is wrong with the above transform? Thanks!
I appreciate the response @inventsekar, thank you. I have a main index that I don't need anymore and I'd like to remove it. If the index can't be deleted, can all of the data be removed from the index?