All Posts


It should be "%FT%H:%M:%S.%3Q%Z". You can always test your time format with an emulation, for example:

    | makeresults format=csv data="eqtext:EventTime
    2024-07-13T16:21:31.287Z"
    | eval _time = strptime('eqtext:EventTime', "%FT%H:%M:%S.%3Q%Z")
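For anyone who wants to sanity-check the timestamp parsing outside Splunk, here is a rough Python sketch using the sample value from this thread. Note this is only an approximation: Python's strptime has no %F or %3Q shortcuts, so the pattern is spelled out and the trailing Z is matched literally before tagging the result as UTC.

```python
from datetime import datetime, timezone

# Sample timestamp from the event in this thread
raw = "2024-07-13T16:21:31.287Z"

# Splunk's "%FT%H:%M:%S.%3Q%Z" expanded for Python: %F becomes %Y-%m-%d,
# and %f parses the fractional seconds (it accepts 3-digit milliseconds).
parsed = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)

print(parsed.isoformat())  # 2024-07-13T16:21:31.287000+00:00
```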
Is there any way to fix this error, given that it may be caused by the application?
Try this:

    index=xyz Feature IN (Create, Update, Search, Health)
    | timechart span=1m count as TotalHits, perc90(Elapsed) by Feature
    | appendpipe [stats max("Total Hits: *") as * | eval _time = "Total Hits"]
    | fields - "Total Hits: *"
    | appendpipe [stats max("perc90(Elapsed): *") as * | eval _time = "perc90(Elapsed)"]
    | fields - "perc90*"
    | tail 2
    | transpose header_field=_time column_name=Feature
    | where Feature != "_span"

Two additional pointers:
- Do not use a second search command if Feature is already available in indexed data.
- Do not use a separate time-bucketing command if you are going to use timechart.

This is my emulation:

    index=_internal
    | rename date_second as Elapsed, log_level as Feature
    | eval Feature = case(Feature == "INFO", "Create", Feature == "WARN", "Health", Feature == "ERROR", "Search", true(), "Update")
    ``` the above emulates index=xyz Feature IN (Create, Update, Search, Health) ```

With this, the result is:

    Feature    perc90(Elapsed)       Total Hits
    Create     59.000000000000000    1283
    Health     48.700000000000000    191
    Search     59                    212
    Update     52.000000000000000    551
Hello Splunkers!!

I have the below event that I want to parse, but it is not parsing with the time format in Splunk. Please help me fix it.

    TIME_FORMAT : %dT%H:%M:%S.%3QZ
    TIME_PREFIX : \<eqtext\:EventTime\>

I have used the above settings, but nothing works. I can still see a mismatch between the indexed time and the event time. Please help me fix it.

Below is the raw event:

<eqtext:EquipmentEvent xmlns:eqtext="http:///FM/EqtEvent/EqtEventExtTypes/V1/1/5" xmlns:sbt="http://FM/Common/Services/ServicesBaseTypes/V1/8/4" xmlns:eqtexo="http://FM/EqtEvent/EqtEventExtOut/V1/1/5"><eqtext:ID><eqtext:Location><eqtext:PhysicalLocation><AreaID>7053</AreaID><ZoneID>33</ZoneID><EquipmentID>25</EquipmentID><ElementID>0</ElementID></eqtext:PhysicalLocation></eqtext:Location><eqtext:Description> Welder cold</eqtext:Description><eqtext:MIS_Address>6.2</eqtext:MIS_Address></eqtext:ID><eqtext:Detail><State>CAME_IN</State><eqtext:EventTime>2024-07-13T16:21:31.287Z</eqtext:EventTime><eqtext:MsgNr>7751154552301783480</eqtext:MsgNr><Severity>INFO</Severity><eqtext:OperatorID>WALVAU-SCADA-1</eqtext:OperatorID><ErrorType>TECHNICAL</ErrorType></eqtext:Detail></eqtext:EquipmentEvent></eqtexo:EquipmentEventReport>
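For reference, the format that matches this event's <eqtext:EventTime> value (2024-07-13T16:21:31.287Z) can be expressed in a props.conf stanza like the sketch below. This is only a sketch: the stanza name is a placeholder for your actual sourcetype, and MAX_TIMESTAMP_LOOKAHEAD is an optional safeguard, not something stated in this thread.

```ini
# Hypothetical props.conf stanza; [eqtext_event] is a placeholder sourcetype.
[eqtext_event]
# %F expands to %Y-%m-%d, %3Q matches the milliseconds, %Z matches the trailing Z.
TIME_FORMAT = %FT%H:%M:%S.%3Q%Z
# Everything up to and including the opening <eqtext:EventTime> tag precedes the timestamp.
TIME_PREFIX = <eqtext:EventTime>
# Optional: limit how far past TIME_PREFIX Splunk scans for the timestamp.
MAX_TIMESTAMP_LOOKAHEAD = 30
```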
Hi @thevikramyadav,

As you are aware, good questions receive better answers! What exactly are you confused about?
- search factor, replication factor, etc.?
- SHC maintenance and support tasks?
- why an SHC is needed in the first place?
- SHC and distributed searching?
- licensing for an SHC, or something else?

Best Regards,
Sekar
Hi @eoronsaye,

May I know if you are trying to install the UF package manually or through tools like Chef, software deployment packages, etc.? Just in case you have missed it, please check this doc: https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller
Hi Team,

While setting up our new remote Heavy Forwarder, we configured it to collect data from 20 Universal Forwarders and syslog devices, averaging about 30GB daily. To control network bandwidth usage, we applied a maximum throughput limit of 1MBps (1024KBps) using the maxKBps setting in limits.conf on the new remote Heavy Forwarder. This setting is intended to cap the rate at which data is forwarded to our indexers, aiming to prevent exceeding the specified bandwidth limit.

However, according to Splunk documentation, this configuration doesn't guarantee that data transmission will always stay below the set maxKBps. It depends on factors such as the status of processing queues and doesn't directly restrict the volume of data being sent over the network.

How can we ensure the remote HF never exceeds the value set in maxKBps?

Regards,
VK
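For reference, the setting described above lives in the [thruput] stanza of limits.conf on the forwarder. A minimal sketch with the 1024KBps value from this scenario (as noted in the question, this caps average throughput between queue flushes rather than hard-limiting every burst on the wire):

```ini
# limits.conf on the heavy forwarder
[thruput]
# Cap average forwarding throughput at ~1 MB/s (value in KB per second).
maxKBps = 1024
```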
I have a result like this:

    column                     row 1
    TotalHits: Create          171
    TotalHits: Health          894
    TotalHits: Search          172
    TotalHits: Update          5
    perc90(Elapsed): Create    55
    perc90(Elapsed): Health    52
    perc90(Elapsed): Search    60
    perc90(Elapsed): Update    39

I want to convert this into:

              Total Hits    perc90(Elapsed)
    Create    171           55
    Update    5             52
    Search    172           60
    Health    894           52

What query should I use?

By the way, to reach the above output I used the following, and even here I am not sure whether it's the best way:

    index=xyz
    | search Feature IN (Create, Update, Search, Health)
    | bin _time span=1m
    | timechart count as TotalHits, perc90(Elapsed) by Feature
    | stats max(*) AS *
    | transpose

Basically, I am trying to get the MAX of the 90th percentile and Total Hits during a time window.
Hi,

I am trying to install Splunk Enterprise on Windows Server 2022 with my domain account, but every time I install it, the installation rolls back. I have checked online, but I keep seeing info that my domain account needs to have the relevant permissions. The version of Splunk Enterprise I am installing is 9.2.0.1.

Can you please advise which permissions should be granted to the domain account, or whether there is anything else that may be causing the rollback?
1. If you really want to brute-force your way through configs, don't just run grep -R over everything, because you'll end up searching through a whole lot of Java code if, for example, you have DB Connect installed. It's enough to do:

    find $SPLUNK_HOME/etc -type f -name \*.conf | xargs grep "index=whatever"

2. This only finds cases where there is an explicit index=something condition in the search. I know it's relatively uncommon, but an index can be specified in other ways, for example with a macro. The index can also be specified with a wildcard, and there are fancier ways of dynamically specifying the index to search. You won't find those this way.
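To see why restricting the search to .conf files matters, here is a self-contained sketch that builds a throwaway stand-in for $SPLUNK_HOME/etc (the paths and index name are made up for illustration) and runs the same find/xargs pipeline:

```shell
#!/bin/sh
# Build a throwaway stand-in for $SPLUNK_HOME/etc with one matching .conf,
# one non-matching .conf, and a .java file that should be skipped entirely.
ETC=$(mktemp -d)
mkdir -p "$ETC/apps/myapp/local"
printf '[mysearch]\nsearch = index=whatever sourcetype=foo\n' > "$ETC/apps/myapp/local/savedsearches.conf"
printf '[other]\nsearch = index=other\n' > "$ETC/apps/myapp/local/macros.conf"
printf '// index=whatever inside Java is never scanned\n' > "$ETC/apps/myapp/Query.java"

# Only .conf files are handed to grep, so the .java file is never searched;
# this prints the path of savedsearches.conf only.
find "$ETC" -type f -name '*.conf' | xargs grep -l "index=whatever"

rm -rf "$ETC"
```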
Hi @bwheelerice,

I recently did a similar exercise. If you have backend access, on the search head CLI:
- navigate to $SPLUNK_HOME/etc
- run: grep -Ril "index=<indexname>"

It will list every location where index=<indexname> is present, covering etc/apps/ and etc/users/. One caveat is that you need to adapt this to your needs; at least it worked for me to find index names and replace them wherever needed.

Where to run the search depends on the component. For example, on a deployment server run it in $SPLUNK_HOME/etc/deployment-apps; on a cluster manager, in $SPLUNK_HOME/etc/manager-apps or master-apps. It works for any keyword you want to look for.
What should I do now to solve the problem?
Hi @thevikramyadav,

In addition to @PickleRick's answer, below is a basic understanding of a SH cluster.

A search head cluster needs a minimum of 3 search heads and a maximum of 100. It is a group of search heads where apps, searches, artifacts, and job scheduling are the same. The group:
- replicates knowledge objects
- replicates search artifacts
- increases search accessibility

Advantages:
- Horizontal scaling
- High availability
- No single point of failure

Deployer:
- Centralized location to distribute apps and other configurations to search head cluster members
- Does not participate in searches

Captain:
- A cluster member with additional responsibilities, including:
  - Scheduling jobs/searches
  - Coordinating alerts and alert suppression across the cluster
  - Pushing the knowledge bundle to search peers (indexers)
  - Coordinating artifact replication
  - Replicating configuration updates

Cluster members:
- Same as a search head in a single-instance deployment
- Participate in searches

Load balancer (optional):
- 3rd-party software
- Resides between users and cluster members

Replication factor:
- Determines the number of copies of each artifact/search result
- Only artifacts/search results from scheduled saved searches are replicated
- Results from ad hoc or real-time searches are not replicated
- By default, scheduled saved search results are stored in $SPLUNK_HOME/var/run/splunk/dispatch/search/

Search peers:
- The indexers where data is searched
Hi @Anud, The add-on documentation explains how to assign icons to nodes. What have you tried so far?
Hi @MichaelBs, After receiving the data, you can use timechart as you normally would. Do you have specific questions about timechart using the sample data provided?
Hi @AliMaher, Archived .conf content is a great place to start. Behind The Magnifying Glass: How Search Works by Jeff Champagne provides a nice overview, and TSTATS and PREFIX by Richard Morgan is fantastic. Try searching conf.splunk.com using your favorite search engine for the term tsidx, e.g. using Google: https://www.google.com/search?q=site%3Aconf.splunk.com+tsidx 
Hi @larunrahul,

You can use the rex, chart, and where commands to extract the call type, summarize the events, and filter the results, respectively:

    | makeresults format=csv data="_raw
    TXN_ID=abcd inbound call INGRESS
    TXN_ID=abcd inbound call EGRESS
    TXN_ID=efgh inbound call INGRESS"
    | extract
    | rex "inbound call (?<call_type>[^\\s]+)"
    | chart count over TXN_ID by call_type
    | where INGRESS!=EGRESS

    TXN_ID    EGRESS    INGRESS
    efgh      0         1

I've used the extract command to automatically extract the TXN_ID field in the example, but if your events are already indexed, Splunk will have done that for you automatically.
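Outside Splunk, the same mismatch check can be sketched in plain Python (sample events taken from this thread; the field names mirror the SPL above):

```python
from collections import Counter
import re

events = [
    "TXN_ID=abcd inbound call INGRESS",
    "TXN_ID=abcd inbound call EGRESS",
    "TXN_ID=efgh inbound call INGRESS",
]

# Count (TXN_ID, call_type) pairs, mirroring `chart count over TXN_ID by call_type`.
counts = Counter()
for event in events:
    txn = re.search(r"TXN_ID=(\S+)", event).group(1)
    call_type = re.search(r"inbound call (\S+)", event).group(1)
    counts[(txn, call_type)] += 1

# Report TXN_IDs whose INGRESS and EGRESS counts differ, like `where INGRESS!=EGRESS`.
txns = {txn for txn, _ in counts}
mismatched = sorted(t for t in txns if counts[(t, "INGRESS")] != counts[(t, "EGRESS")])
print(mismatched)  # ['efgh']
```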
It's something your IT department should take care of. Basics of networking and general sysadmin work are way out of scope of this forum.
Hi Folks,

I have two types of events that look like this:

    Type1: TXN_ID=abcd inbound call INGRESS
    Type2: TXN_ID=abcd inbound call EGRESS

I want to find out how many events of each type there are per TXN_ID. If the counts per type don't match for a TXN_ID, I want to output that TXN_ID.

I know that we can do stats count by TXN_ID, but how do I do that per event type in the same query?

Appreciate the help.

Thanks
How can we configure the DNS?