All Posts


I want to do everything at the indexer level. Any idea how to handle this data set?
Could you please confirm the updated query? I am using the query below, but no results are found. Please suggest.

index=whatever yoursearchterms
| bin _time span=1s
| stats count AS TPS by _time service
| stats max(TPS) AS "MaxTPS" min(TPS) AS "MinTPS" avg(TPS) AS "AVG TPS" by service
Hi, have you tried using a value bigger than 100 in count? Maybe another option is to use this: https://splunkbase.splunk.com/app/7171 r. Ismo
Hi, have you looked at and tried this? https://community.splunk.com/t5/Getting-Data-In/Ingesting-offline-Windows-Event-logs-from-different-systems/m-p/649515 r. Ismo
Sorry for the delay. I'm still having issues; I have posted my update.
I have updated tomcat.service with tools.jar, but it is still not working.

Environment='JAVA_OPTS=-Djava.awt.headless=true -javaagent:/opt/appdynamics/javaagent.jar -Dappdynamics.agent.applicationName=biznuvo_test1_node2 -Dappdynamics.agent.tierName=biznuvo_test1_node2 -Dappdynamics.agent.nodeName=biznuvo_test1_node2-i-0e42a26c009127da0 -Dappdynamics.agent.uniqueHostId=biznuvo_TEST1_node2_ip-172_31_31_131 -Xbootclasspath:/usr/lib/jvm/jdk-1.8-oracle-x64/jre/lib/ext/tools.jar'
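One thing worth checking (an assumption on my part, not something confirmed in the post): the bare -Xbootclasspath: option replaces the entire boot class path, which usually prevents the JVM from starting at all, whereas the /a: form appends to it. If the intent is only to add tools.jar, the flag would look something like this, with the rest of the JAVA_OPTS line above unchanged:

-Xbootclasspath/a:/usr/lib/jvm/jdk-1.8-oracle-x64/jre/lib/ext/tools.jar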
Based on the number of your log events, it would have been a surprise if that had helped. Have you looked at the network interface stats to see if there is anything weird? Is it so that this same issue occurs on all your Linux UF nodes? If yes, then it points heavily to a configuration issue! Can you show your outputs.conf settings exported by btool with the --debug option?
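For reference, a minimal sketch of that btool invocation (assuming a default $SPLUNK_HOME on the forwarder):

$SPLUNK_HOME/bin/splunk btool outputs list --debug

The --debug flag prints the file each setting comes from, which makes it easy to spot a stray outputs.conf overriding your intended configuration.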
I have a field called eventtime in my logs, but the time is 19 characters long in epoch time (which goes to nanoseconds). The field is in the middle of the events, but I want to use it as the timestamp. However, when I define the TIME_PREFIX through the UI, it won't recognize it. There is another field that also has epoch time, but only 10 characters; when I use that one, it works, it just doesn't give me the nanoseconds. So it's not a syntax issue. There are no periods in the timestamp. How can I fix this? Using the UI for testing is easier to get feedback, but if I need to modify it in props.conf, that's fine.

Additional context: the data comes in in JSON format, but only uses single quotes. I fixed this by using SEDCMD in props.conf to swap the single quotes with double quotes. In the TIME_PREFIX box (again, in the UI), I used single quotes as double quotes didn't work (which makes sense).

'eventtime': '1707613171105400540' 'itime_t': 1707613170'
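A minimal props.conf sketch for this case (assuming the 19-digit value is epoch seconds followed by 9 subsecond digits, that the prefix regex must match the original single-quoted data, and that [your_sourcetype] stands in for the actual sourcetype name):

[your_sourcetype]
TIME_PREFIX = 'eventtime':\s*'
TIME_FORMAT = %s%9N
MAX_TIMESTAMP_LOOKAHEAD = 19

The %s%9N format reads the epoch seconds and then nine digits of nanoseconds, and MAX_TIMESTAMP_LOOKAHEAD stops Splunk from reading past the 19-digit value.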
I am using Splunk Cloud. As admin, I created a new user, but the user has not yet received an email notification with the necessary login details. What might be the issue?
Hi @haleyh44, here you can find the process of migration from a standalone indexer to a clustered environment: https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Migratenon-clusteredindexerstoaclusteredenvironment Ciao. Giuseppe
Thanks! Do I need to create and designate my cluster manager first and then cluster my indexers? I am trying to figure out what to do first after making a new indexer and how I can cluster my two indexers. The same goes for my new search head; I don't know which step to take first.
Yes, I already followed that source too.
In my environment I have 4 indexers, and daily indexing is 50 GB/day. The retention period is 30 days: hot buckets are retained for 10 days and cold buckets for 20 days. How can we calculate indexer storage?
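A rough sketch of the usual sizing arithmetic (assuming the common rule of thumb that indexed data occupies about half its raw size: roughly 15% for compressed rawdata plus 35% for index files, and ignoring replication):

50 GB/day x 30 days x 0.5 = 750 GB across the cluster
750 GB / 4 indexers = ~188 GB per indexer

The hot (10 days) versus cold (20 days) split then determines how much of that per-indexer total sits on each storage tier; with an indexer cluster you would also multiply by your replication and search factors.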
Hello everyone, I am new to Splunk. I am trying to get the queue or event counts with status="spooling" that happened after the very first error (status="printing,error") occurred. How could I do this?

I have events with: sourcetype=winprintmon host=bartender2020 type=PrintJob printer="*" (gets all printers). For example, zebra1065 could have a status of "printing" / "printing,error" / "spooling". What I want to do is: if a printer has an error (status="printing,error") at 6am, count the events of that printer with status="spooling" (which is the queue) that occurred after 6am.

Desired result format: printer name | Counts of spooling(queue)

Hope this explains it better; I've been dealing with this for days. Thank you so much in advance!
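A minimal SPL sketch of one approach (assuming status and printer are already extracted fields; eventstats computes the earliest error time per printer, and the where clause keeps only the spooling events after it):

sourcetype=winprintmon host=bartender2020 type=PrintJob printer="*"
| eventstats min(eval(if(status="printing,error", _time, null()))) AS first_error_time by printer
| where status="spooling" AND _time > first_error_time
| stats count AS "Counts of spooling(queue)" by printer

Printers that never logged an error get a null first_error_time, so the where clause drops them and only printers with at least one error appear in the results.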
Exactly what I needed!  Thanks!
Hi @avikc100, you have to use the eval command to change the source value, so you could use the case statement with many values. Note that eval's = comparison does not do wildcard matching, so like() with % wildcards is used instead:

| eval source=case(
    like(source, "%PACA.log"), "Canada Pricing Call",
    like(source, "%second_value.log"), "Second value",
    like(source, "%third_value.log"), "Third value")

Ciao. Giuseppe
This is my Splunk query:

index="webmethods_prd" source="/apps/WebMethods/IntegrationServer/instances/default/logs/ExternalPACA.log"
| eval timestamp=strftime(_time, "%F")
| chart limit=30 count as count over source by timestamp

It is showing the result, but I want to add a custom name to it. How should I do that?
This is what I have so far:

<form version="1.1" theme="light">
  <label>AutoSelectMulti</label>
  <init>
    <set token="pre_indexes"></set>
  </init>
  <fieldset submitButton="true" autoRun="false">
    <input type="multiselect" token="server" searchWhenChanged="true">
      <label>Server</label>
      <fieldForLabel>dns</fieldForLabel>
      <fieldForValue>dns</fieldForValue>
      <search>
        <query>index=summary source=sc dns=eaz* | dedup dns | table dns</query>
      </search>
      <delimiter> ,</delimiter>
    </input>
    <input type="multiselect" token="ds1">
      <label>DS1</label>
      <fieldForLabel>host</fieldForLabel>
      <fieldForValue>host</fieldForValue>
      <search>
        <query>index=summary source=sc dns=eaz* | search dns IN ($server$) | dedup host | table host</query>
        <earliest>-30d@d</earliest>
        <latest>now</latest>
      </search>
      <delimiter> ,</delimiter>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>EAST DS</title>
        <search>
          <query>| makeresults | eval ServerclassInfo="[serverClass:serverclass] whitelist.0 = server1 whitelist.1 = server2 Server List which needs to add under whitelist = $server$ EAST Deployment Server : $ds$" | fields ServerclassInfo | fields - _time</query>
          <earliest>-30d@d</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>West DS</title>
        <search>
          <query>| makeresults | eval ServerclassInfo="[serverClass:serverclass] whitelist.0 = server1 whitelist.1 = server2 Server List which needs to add under whitelist = $server$ WEST Deployment Server : $ds$" | fields ServerclassInfo | fields - _time</query>
          <earliest>-30d@d</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

What I'm ultimately looking for is this: if I select 5 servers, of which 3 go to US and 2 go to UK, I want two panels. The US panel shows the 3 servers with their DS, while the other panel shows the same thing, but only for the 2 UK servers. It's okay if we don't have that Deployment Server input.
Hi @arun97, surely the issue is related to the VPN. Ciao. Giuseppe
I tried setting parallelIngestionPipelines = 2 in server.conf and the behavior did not change. I also tried stopping the sysmon daemon and disabling the sysmon journald input; it had no effect on the above behavior.
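For completeness, a minimal sketch of how that setting would look (assuming it was placed in the [general] stanza of server.conf on the ingesting instance, e.g. $SPLUNK_HOME/etc/system/local/server.conf):

[general]
parallelIngestionPipelines = 2

The change only takes effect after restarting the Splunk instance.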