All Posts

Honestly - I have no idea what those tables are supposed to represent. I understand that there are two separate "zones" and you have some servers in them. How many servers, and what roles do they have? Is there any connectivity between those zones, or are they completely air-gapped? If so, how do you handle licensing? What do you mean by "implement high availability"? On which layer? Architecting an environment from scratch is usually a relatively big and important task, so while this forum is a good place for asking general architecture-related questions, it's not a replacement for the work of a properly trained Splunk Pre-sales team or Splunk Partner engineers.
Hi @spl_unker, we are facing a similar issue to the one you faced a year ago. Our short description is also getting truncated after 80 characters, and when I checked the code snippet it has the same details as the ones you showed in your post:

    FIELD_SEPARATOR = "||"
    INDEX_LENGTH = 80

Did you find the answer for this? Does it have anything to do with the INDEX_LENGTH? Should we change the index length and redeploy? Your answer on this will be highly appreciated.
OK. If it's just for testing the functionality, I won't be bugging you about it too much. Just remember that apart from very specific cases, index-time extractions are best avoided.

But back to the point - if you want to extract a field from a previously extracted field, you need two separate transforms and you must make sure they are triggered in the proper order. So you first define a transform which extracts a field (or set of fields) from the raw data, and then define another transform which extracts your field from the already extracted field. As a bonus you might (if you don't need it indexed) add yet another transform to "delete" the field extracted in the first step (by setting it to null() using INGEST_EVAL). Example:

transforms.conf:

    [test_extract_payload]
    REGEX = payload:\s"([^"]+)"
    FORMAT = payload::$1
    WRITE_META = true

    [test_extract_site]
    REGEX = site:\s(\S+)
    FORMAT = site::$1
    WRITE_META = true
    SOURCE_KEY = payload

props.conf:

    [my_sourcetype]
    TRANSFORMS-extract-site-from-payload = test_extract_payload, test_extract_site

This way you'll get your site field extracted from an event containing

    payload: "whatever whatever site: site1 whatever"

but not from just

    whatever whatever site: site1 whatever

or

    payload: "whatever whatever" site: site1
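One follow-up worth adding (a sketch, not part of the original answer): because WRITE_META = true makes these indexed fields, search-time behavior is more predictable if you also declare the field in fields.conf on the search head. Assuming the field name site from the example above:

fields.conf:

    [site]
    INDEXED = true

You can then query it with the indexed-field syntax, e.g. site::site1.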
OK, I understand, but do you know if another way exists? For example, modifying limits.conf in this way: [...] or trying to work with transforms.conf: [...] What do you think? Thank you so much.
| where _time - strptime(startDateTime,"%FT%T%Z") > 4*60*60 and isResolved="False"
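A brief note on what this snippet does (my reading of it, with the field names as given): strptime parses startDateTime from its ISO-8601 form into epoch seconds, subtracting that from _time gives the elapsed seconds since the incident started, and 4*60*60 is the 4-hour threshold; the isResolved="False" clause keeps only the still-open incidents.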
Howdy all, perhaps someone can help me remember the SPL query that lists out the datasets as fields in the data model. It's something like:

    | datamodel SomeDataModel accelerate....

(I don't remember the rest and can't find the relevant notes I had.) Thanks in advance.
Hello everyone, I recently installed the Splunk DB Connect app (3.16.0) on my Splunk heavy forwarder (9.1.1). As per the documentation I installed the JRE and the MySQL add-on, and created identities, connections and inputs. But when I check for the data, it is not getting ingested. So I enabled debug mode, checked the logs, and found an HEC token error. But the HEC token is configured in the inputs.conf file, and I can see the same in the Splunk web GUI under Data inputs -> HTTP Event Collector. Could you please help if anyone has faced this error before?

Error:

    ERROR org.easybatch.core.job.BatchJob - Unable to write records
    java.io.IOException: There are no Http Event Collectors available at this time.

    ERROR c.s.d.s.dbinput.recordwriter.CheckpointUpdater - action=skip_checkpoint_update_batch_writing_failed
    java.io.IOException: There are no Http Event Collectors available at this time.

    ERROR c.s.d.s.task.listeners.RecordWriterMetricsListener - action=unable_to_write_batch
    java.io.IOException: There are no Http Event Collectors available at this time.
It does depend on what values you have in your 25k dropdown list, but if we assume that the list is generated dynamically with some sort of search, your search could include a filter which is based on a token from a much smaller dropdown so that you can limit your results to under 1000.
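To make that concrete, here is a minimal Simple XML sketch of the two-dropdown pattern (the lookup name big_list.csv and the field name value are placeholders for illustration, not from the original thread): a small "starts with" dropdown sets a token, and the populating search for the large dropdown filters on that token so the result stays under the 1000-entry limit.

    <input type="dropdown" token="prefix">
      <label>Starts with</label>
      <choice value="A*">A</choice>
      <choice value="B*">B</choice>
      <choice value="C*">C</choice>
      <default>A*</default>
    </input>
    <input type="dropdown" token="selected_value">
      <label>Value</label>
      <search>
        <query>| inputlookup big_list.csv | search value="$prefix$" | dedup value | sort value</query>
      </search>
      <fieldForLabel>value</fieldForLabel>
      <fieldForValue>value</fieldForValue>
    </input>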
Dear all, I want to set up an alert on an event. The event contains three timestamps: New Event time, Last update, and startDateTime. These are logs coming from MS365. I want the alert to fire if the field "Incident Resolved = False" still holds even 4 hours after the startDateTime. So we receive the first event at startDateTime, but we don't want an alert until 4 hours after startDateTime.

    startDateTime: 2024-07-01T09:00:00Z
I am trying to test index-time field extraction and want to know how to refine the field extraction using the SOURCE_KEY keyword. How can I refine my field extraction if I can't use SOURCE_KEY twice?
Our network devices send data to a syslog server and then up to our Splunk instance. I have a few TAs that I've requested our SysAdmin team install on my behalf so the logs can be parsed out. However, the TAs aren't parsing the data, and furthermore, the network device logs come in with the sourcetype "syslog" rather than the sourcetype defined in the respective TAs. Where do I need to look, or have the SysAdmins look? (I'm just a power user.)
I'm not sure that I understand correctly. Can you please explain in more detail what you have in mind? Thank you so much.
Please share your full search so we might be able to determine why you are not getting any results.
No. Splunk Cloud "underneath" is essentially the same indexing and searching machinery embedded within some additional management layer. So the delete command works exactly the same - it masks the data as "unreachable". It doesn't change anything else - the data has already been ingested so it counts against your license entitlement.
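For reference, a minimal sketch of how the delete command is typically run (the index and sourcetype names here are placeholders): the search first narrows down exactly the events to mask, then pipes them to delete. Note that delete requires a role with the can_delete capability, which even the admin role does not have by default.

    index=my_index sourcetype=bad_data earliest=-24h latest=now
    | delete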
If you're using the timechart command, it generates a zero count for periods when there are no values. Otherwise you need to use the approach described at https://www.duanewaddle.com/proving-a-negative/
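A minimal sketch of the timechart behavior (index and sourcetype are placeholders): every bucket in the chosen time range gets a row, and buckets with no events report count=0, which you can then filter or alert on.

    index=my_index sourcetype=my_sourcetype
    | timechart span=10m count
    | where count=0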
Really? You expect your users to search through 25000 choices in your dropdown? You might be better off (or at least your users might be) using another dropdown to provide a filter for the 25000-entry list, e.g. values beginning with A, values beginning with B, etc.
@ITWhisperer Thank you for your response.  But it did not work.  I don't get any results.
@ITWhisperer I assume that 4624 means the Windows EventCode, so these are standard Windows event logs. @sujald The easiest approach is indeed the one shown by @ITWhisperer, but it will show you the number of logins aligned to 10-minute periods. So if someone logged in every minute from 10:13 till 10:26, you will get two separate "buckets" of logins - one starting at 10:10, another at 10:20. If you want a moving-window count, you'll need to employ the streamstats command with time_window=10m.
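A minimal sketch of the streamstats variant (the index, sourcetype, user field, and threshold are assumptions about the data, not from the thread): each event gets the count of that user's logins within the trailing 10 minutes, which you can then threshold on. streamstats with time_window expects time-ordered events; the default newest-first search order satisfies that.

    index=wineventlog sourcetype=WinEventLog EventCode=4624
    | streamstats time_window=10m count AS logins_last_10m BY user
    | where logins_last_10m > 5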
Hello, I have a problem with the dropdown menu limit, which displays a maximum of 1000 values. I need to display a list of 22000 values and I don't know how to do it. Thank you so much.
Hi team,

Currently we have implemented the following standalone servers.

Zone-1

    Environment | Server Name | IP | Splunk Role
    DEV         | L4          |    | Search Head + Indexer
    QA          | L4          |    | Search Head + Indexer
                | L4          |    | Deployment Server

Zone-2

    Environment | Server Name | IP | Splunk Role
    DEV         | L4          |    | Search Head + Indexer
    QA          | L4          |    | Search Head + Indexer

In our environment there are only the two combined Search Head + Indexer servers per zone, each on a single instance.

How do we implement high availability for these servers? Please help me with the process.