
All Posts

The requirement is a bit imprecise. Do you mean when a 5-day rolling window average drops by 10% from one day to the next, or do you want to compare the average number of hits over 5 days with the average over the preceding 5 days (so you would calculate two values from 10 days in total), or maybe something else?
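For example, if you mean the first interpretation, a rough sketch could look something like this (the index, sourcetype, and field names are only placeholders; adjust to your data):
index=web sourcetype=access_combined
| timechart span=1d count AS hits
| streamstats window=5 avg(hits) AS avg_5d
| delta avg_5d AS change
| eval pct_change=round(100*change/(avg_5d-change),1)
| where pct_change<=-10
That flags any day on which the 5-day rolling average dropped by 10% or more compared to the previous day's rolling average.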
As usual, I advise against using the default date_* fields. Firstly, they don't have to be present in every event, so if you get into the habit of relying on them you might be unpleasantly surprised. Secondly, they correspond to the original value of the original timestamp, so they might not be aligned to your timezone. I'd go with:
<base search>
| eval hour=strftime(_time,"%H")
| where NOT (hour>=2 AND hour<=3 AND in(USA,"Washington","New York",and so on))
You can't load balance syslog traffic without an external load balancer (and even then syslog traffic doesn't load balance very well, as load balancers typically don't speak syslog; you can build your own rsyslog-based load balancer, but then you're introducing another SPOF). You can get a relatively-high-availability setup with an active-standby pair and a floating IP using keepalived or a similar solution. It's still not 100% foolproof, and you'll have data loss when the primary node fails before the IP fails over to the secondary node (and TCP connections time out/get reset).
See the fieldformat command. It lets you tell Splunk to process the data as-is but display it in a different (usually more human-readable) form.
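A quick illustration (runnable as-is; the field name is made up): the underlying value stays numeric for sorting and calculations, but the UI shows the formatted string.
| makeresults count=3
| streamstats count
| eval last_seen=now()-count*3600
| fieldformat last_seen=strftime(last_seen,"%F %T")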
Hi @kenbaugher, did you try the chart command (https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/Chart)? Please adapt this example to your use case:
index=your_index
| chart values("End Time") AS End_Time OVER Date BY System
One additional hint: don't use spaces in field names. Ciao. Giuseppe
Hi @SplunkDash, first of all, why are you using a lookup if you need a timestamp? A lookup is a static table; if you need to associate a timestamp with each row, it's easier to store this CSV data in an index. Anyway, you could also create a time-based lookup, but I have never used this option because, in this situation, I prefer the previous solution. Finally, to answer your question directly: you should convert the timestamp fields to epoch time, using eval with strptime, so you can work with the timestamps and compare them with the time picker. Ciao. Giuseppe
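As a rough sketch (assuming the UPDATE_DATE format shown in the question, e.g. 01/05/24 04:49:26):
| inputlookup account_audit.csv
| eval update_epoch=strptime(UPDATE_DATE,"%m/%d/%y %H:%M:%S")
| where update_epoch>=strptime("01/25/24 00:00:00","%m/%d/%y %H:%M:%S")
If you index the data instead, the timestamp becomes _time and the time picker works on it directly.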
Hi @edalbanese, I don't usually work with OpenAI, although I'll be doing so in the near future for a new customer. But I saw two Italian Splunk Sales Engineers do exactly what you are looking for with this add-on. They showed this concept app at a meeting of the Italia Splunk User Group. Ciao. Giuseppe
Hello, I have a lookup table called account_audit.csv with a timestamp field, e.g. UPDATE_DATE=01/05/24 04:49:26. How can I find all events within that lookup with UPDATE_DATE >= 01/25/24? Any recommendations will be highly appreciated. Thank you!
What Splunk says to avoid is having any Splunk instance listen on a TCP/UDP port for syslog data. Whenever Splunk restarts, any data sent to the port will be lost until Splunk comes back up, which could be minutes. A dedicated syslog receiver is much faster to restart. The problem could be alleviated somewhat by fronting the Splunk TCP port with a load balancer. If you plan to refactor, consider putting multiple SC4S instances (they're Docker containers) close to the syslog sources.
First, thank you for clearly illustrating input data and desired output. Note that join is a performance killer and best avoided; in this case it is overkill. If I decipher your requirement from the complex SPL correctly, all you want is a correlation between INFO and ERROR logs to output exceptions correlated with the failed claim, file, etc. Whereas it is not difficult to extract the claim number from both types of logs given the illustrated format, an easier correlation field is SessionID because it appears in both types in the exact same form. Additionally, there should be no need to extract clmNumber and confirmationNumber because they are automatically extracted. The name field is garbled because of unquoted white spaces. This is a simpler search that should satisfy your requirement:
index="myindex" ("/app1/service/site/upload failed" AND "source=Web" AND "confirmationNumber=ND_*") OR ("Exception from executeScript")
| rex "\bname=(?<name>[^,]+)" ```| rex "clmNumber=(?<ClaimNumber>[^,]+)" | rex "confirmationNumber=(?<SubmissionNumber>[^},]+)" | rex "contentType=(?<ContentType>[^},]+)" ```
| rex "(?<SessionID>\[http-nio-8080-exec-\d+\])"
| rex "Exception from executeScript: (?<Exception>[^:]+)"
| fields clmNumber confirmationNumber name Exception SessionID
| stats min(_time) as _time values(*) as * by SessionID
Your sample logs should give:
SessionID: [http-nio-8080-exec-200]
_time: 2024-02-15 09:41:16.762
Exception: 0115100953 Document not found - Tristian CLAIM #99900470018 PACKAGE.pdf
clmNumber: 99900470018
confirmationNumber: ND_52233-02152024
name: Tristian CLAIM #99900470018 PACKAGE.pdf

SessionID: [http-nio-8080-exec-202]
_time: 2024-02-15 09:07:47.769
Exception: 0115100898 Duplicate Child Exception - ROAMN Claim # 99900468430 Invoice.pdf already exists in the location.
clmNumber: 99900468430
confirmationNumber: ND_50249-02152024
name: ROAMN Claim # 99900468430 Invoice.pdf

Of course you can remove SessionID from display and rearrange field order. You can play with the following emulation and compare with real data:
| makeresults
| eval data = split("2024-02-15 09:07:47,770 INFO [com.mysite.core.app1.upload.FileUploadWebScript] [http-nio-8080-exec-202] The Upload Service /app1/service/site/upload failed in 0.124000 seconds, {comments=xxx-123, senderCompany=Company1, source=Web, title=Submitted via Site website, submitterType=Others, senderName=ROMAN , confirmationNumber=ND_50249-02152024, clmNumber=99900468430, name=ROAMN Claim # 99900468430 Invoice.pdf, contentType=Email}
2024-02-15 09:07:47,772 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-202] Exception from executeScript: 0115100898 Duplicate Child Exception - ROAMN Claim # 99900468430 Invoice.pdf already exists in the location.
---
---
---
2024-02-15 09:41:16,762 INFO [com.mysite.core.app1.upload.FileUploadWebScript] [http-nio-8080-exec-200] The Upload Service /app1/service/site/upload failed in 0.138000 seconds, {comments=yyy-789, senderCompany=Company2, source=Web, title=Submitted via Site website, submitterType=Public Adjuster, senderName=Tristian, confirmationNumber=ND_52233-02152024, clmNumber=99900470018, name=Tristian CLAIM #99900470018 PACKAGE.pdf, contentType=Email}
2024-02-15 09:41:16,764 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-200] Exception from executeScript: 0115100953 Document not found - Tristian CLAIM #99900470018 PACKAGE.pdf", "
")
| mvexpand data
| rename data AS _raw
| rex "^(?<_time>\S+ \S+)"
| eval _time = strptime(_time, "%F %T,%3N")
| extract
``` the above emulates (index="myindex" "/app1/service/site/upload failed" AND "source=Web" AND "confirmationNumber=ND_*") OR (index="myindex" "Exception from executeScript") ```
| rex "\bname=(?<name>[^,]+)" ```| rex "clmNumber=(?<ClaimNumber>[^,]+)" | rex "confirmationNumber=(?<SubmissionNumber>[^},]+)" | rex "contentType=(?<ContentType>[^},]+)" ```
| rex "(?<SessionID>\[http-nio-8080-exec-\d+\])"
| rex "Exception from executeScript: (?<Exception>[^:]+)"
| fields clmNumber confirmationNumber name Exception SessionID
| stats min(_time) as _time values(*) as * by SessionID
(I was just trying to clarify @bowesmana's syntax; it was not related to the original question.) It is always good practice to illustrate sample/mock data at the beginning. Now, the sample JSON needs further clarification in relation to your OP.
Is this snippet the field3 you referred to in the OP? If not, which one is field3?
This snippet contains a key "event.ResourceAttributes.Resource Name". I assume that this is the "Resource Name" you referred to in the OP. Is this correct?
Which fields correspond to "Attribute Name" and "ID" in the OP?
Importantly, when illustrating structured data like JSON, make sure your illustration is compliant. I tried to reconstruct a compliant JSON from your illustration. This is what I came up with:
{"event": { "AccountId": "xxxxxxxxxx", "CloudPlatform": "CloudProvider", "CloudService": "Service", "ResourceAttributes": {"key1": "value1", "key2": "value2", "key3": "value3", "key4": [{"key": "value", "key": "value"}], "Resource Name": "name-resource-121sg6fe", "etc": "etc"} } }
Does this truly reflect your original data structure? If the snippet is field3, here is an emulation to check whether my understanding is correct:
| makeresults
| eval field3 = "{\"event\": { \"AccountId\": \"xxxxxxxxxx\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"key1\": \"value1\", \"key2\": \"value2\", \"key3\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"Resource Name\": \"name-resource-121sg6fe\", \"etc\": \"etc\"} } }"
| spath input=field3
| fields - field3 _*
This gives:
event.AccountId = xxxxxxxxxx
event.CloudPlatform = CloudProvider
event.CloudService = Service
event.ResourceAttributes.Resource Name = name-resource-121sg6fe
event.ResourceAttributes.etc = etc
event.ResourceAttributes.key1 = value1
event.ResourceAttributes.key2 = value2
event.ResourceAttributes.key3 = value3
event.ResourceAttributes.key4{}.key = value value
Is this close? Also, if you have a specific output format in mind, you should illustrate what the output should look like when using this sample data.
I've read that too and it was kind of ambiguous to me. The same Linux server is running an instance of the Splunk UF as well as rsyslog. The latter is writing to log files that are listed in the former's inputs.conf. If this is the exact scenario that Splunk says not to use, they should word it a bit better. Otherwise, the syslog daemon would have to write to log files across the network, or the UF would have to reach out to read the log files remotely. Either of those ideas seemed like a bad thing. In any case, with our impending colo changes I'm going to have to refactor these hosts, so if separating the services is the best path forward, that's what we'll do. But I will still need to LB a pair of forwarders... unless someone knows how well the UF and/or HF scale hardware-wise? Like if I throw 8 cores, a fistful of memory, and 25 gigs of network throughput at it, can a single Splunk host process enough data to keep up with, say, 100k events per day?
Splunk recommends not using a forwarder as a syslog receiver because it can lead to data loss. The preferred method is to use a dedicated syslog server (syslog-ng or rsyslog) to write syslog events to disk files and have a UF monitor those files and forward the contents to your indexers.  Another option is Splunk Connect for Syslog (SC4S), which wraps around syslog-ng and eliminates the need for a forwarder.
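To make that concrete, the UF side of such a setup is typically just a monitor stanza. A minimal sketch, assuming the syslog daemon writes one directory per sending host (the path, index, and sourcetype here are examples only):
# inputs.conf on the UF
[monitor:///var/log/remote/*/messages.log]
sourcetype = syslog
index = network
host_segment = 4
disabled = 0
The host_segment setting takes the host name from the fourth path segment, i.e. the per-host directory in this hypothetical layout.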
You have the right idea. Here's how to do that in SPL.
index=some_index "some search criteria"
| eval PODNAME=case(in(SERVERNAME, {list of servernames}), "ONTARIO",
    in(SERVERNAME, {list of servernames}), "GEORGIA",
    1==1, "unknown")
| timechart span=30min count by PODNAME
There's a better way, though, since the above doesn't scale well with many locations and may become hard to maintain if the code is used in many places. Use a lookup table. Create a CSV file with SERVERNAME and PODNAME columns, then use the lookup to map server name to location.
index=some_index "some search criteria"
| lookup serverlocation.csv SERVERNAME OUTPUT PODNAME
| timechart span=30min count by PODNAME
Now, when servers are added or removed you just need to edit the lookup file rather than change SPL. I recommend the Splunk App for Lookup File Editing to modify CSV files.
I knew HFs could handle custom ports, but our team has been limited to UFs for over a year now. I'd be tickled pink to ditch rsyslog entirely... I should perhaps have asked a different question: replacing syslog with a Heavy Forwarder.
@scelikok Thank you.
index=<myindex> | search USA="Washington" NOT date_hour IN (2,3)
is not working; it's only filtering Washington, not excluding events between 2 and 3. I also want the remaining values reported all the time.
As @scelikok pointed out, it's not within the scope of the UFs. Maybe you could change from UFs to HFs and have the job done? With HFs, you can receive the data on TCP/UDP ports, apply some transformations or discard some of the data, and then send it on to Splunk. Kind regards, Rafael Santos
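For illustration, a minimal sketch of what that could look like on a HF; the port, sourcetype, index, and the regex used to discard events are placeholders, not a recommendation:
# inputs.conf - listen for syslog on a UDP port
[udp://5514]
sourcetype = syslog
index = network

# props.conf - route matching events through a transform
[syslog]
TRANSFORMS-drop_noise = drop_noise

# transforms.conf - send unwanted events to the null queue (i.e. discard them)
[drop_noise]
REGEX = pattern_to_discard
DEST_KEY = queue
FORMAT = nullQueue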
We have data similar to the below and are trying to chart it with a line or bar graph similar to the chart shown, which was created in Excel. We've been able to do various things to calculate a duration from midnight on the date to the end time, to give a consistent starting point for each, but Splunk does not seem to want to chart the duration or a timestamp since they are strings. We can chart it as a value like a Unix-format date, but that isn't really human readable.
Date     System   End Time
20240209 SYSTEM1  2/9/24 10:39 PM
20240209 SYSTEM2  2/9/24 10:34 PM
20240209 SYSTEM3  2/9/24 11:08 PM
20240212 SYSTEM1  2/12/24 10:37 PM
20240212 SYSTEM2  2/12/24 10:19 PM
20240212 SYSTEM3  2/12/24 11:10 PM
20240213 SYSTEM1  2/13/24 11:19 PM
20240213 SYSTEM2  2/13/24 10:17 PM
20240213 SYSTEM3  2/13/24 11:00 PM
20240214 SYSTEM1  2/14/24 10:35 PM
20240214 SYSTEM2  2/14/24 10:23 PM
20240214 SYSTEM3  2/14/24 11:08 PM
20240215 SYSTEM1  2/15/24 10:36 PM
20240215 SYSTEM2  2/15/24 10:17 PM
20240215 SYSTEM3  2/15/24 11:03 PM
Our Splunk implementation has SERVERNAME as a preset field, and there are servers in different locations, but there is no location field. How can I count errors by location? I envision something like this but cannot find a way to implement it:
index=some_index "some search criteria"
| eval PODNAME="ONTARIO" if SERVERNAME IN ({list of servernames})
| eval PODNAME="GEORGIA" if SERVERNAME IN ({list of servernames})
| timechart span=30min count by PODNAME
Any ideas?
Hey @scelikok, thanks for the pointer. I realize my question might fall into a grey area; I hope others who may have insight or experience still chime in. Learning things about load balancing just the UF will also be super helpful!