All Posts

You haven't explained a fundamental part of the problem - how do you know which servers go into US and which go into UK (or EAST and WEST as in your example)? You need to know how to tell whether server 1 belongs to panel 1 or panel 2. Then you simply need a base search that splits the selected servers according to their region, e.g.

| makeresults
| eval server=split($server|s$, ",")
| mvexpand server
| eval region=<<DEFINE YOUR LOGIC HERE TO CREATE REGION BASED ON HOST>>
| stats values(server) as server by region
| eval server=mvjoin(server, ",")
| transpose 0 header_field=region

and then you have a <done> clause where you set the tokens for each panel accordingly

<done>
  <set token="region_1">$result.region_1$</set>
  <set token="region_2">$result.region_2$</set>
</done>

and you then use the region_1 and region_2 tokens in your panels instead of $server$
Colours are assigned to series, i.e. all bars from the same series are the same colour. This is because of the way they are drawn in the chart viz: they are drawn as a single shape for the whole series, not as individual bars. If you want them to have different colours, they need to be different series. Think of the table of data: all data points in the same column of the table will have the same colour in the chart.
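Once the data is split into separate series, Simple XML also lets you pin a colour to each series name via charting.fieldColors. A sketch (the query, series names, and hex values are placeholders, not from the original question):

```xml
<chart>
  <search>
    <query>index=main | chart count over host by region</query>
  </search>
  <!-- map each series name to a fixed colour -->
  <option name="charting.fieldColors">{"EAST": 0x1E93C6, "WEST": 0xF2B827}</option>
</chart>
```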
Not sure if this is feasible. Basically I would like a chart that shows the average of a statistic for different nodes and a distinct count of different nodes, so the two searches would be something like:

1. index=xxx sourcetype=yyy | timechart avg(stat1) by node
2. index=xxx sourcetype=yyy | timechart dc(node)

Both searches would show up on the same timechart panel for the same period with the same time span. Sorry if this is unclear, happy to clarify. I tried eventstats, append, appendcols, and join, but they do not seem to work for this. Could be I'm misusing them though.
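One pattern worth trying (a sketch, not tested against this data) is appendcols, which pastes the columns of a subsearch alongside the main results row by row:

```
index=xxx sourcetype=yyy
| timechart avg(stat1) by node
| appendcols
    [ search index=xxx sourcetype=yyy
    | timechart dc(node) as distinct_nodes ]
```

The alignment is positional, so this only works when both searches run over the same time range with the same span, producing one row per time bucket.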
The original query:

host="MEIPC" source="WinEventLog:Application" OR source="WinEventLog:Security" OR source="WinEventLog:System"
| chart count by source

A possible solution I could not get to work:

| top limit=10 class showperc=f countfield="source"
| reverse
| transpose header_field="Class" column_name="Class"
| search class="source"

So I tried searching all over for how to change the colour of the bars for each of the 3 sources I gathered data from. I put it in the dashboard and noticed that it groups them all under one encompassing series, without an individual option for each source; the sources are labelled on the X axis. However, when I try to change the colour of the bars, only changing the colour of count, which is the Y axis, changes the colour of the bars. This confuses me because I would expect to be able to change the colour options in the dashboard menus for each individual X-axis source, but instead it is the Y-axis count that changes the bar colour, and there is no option to colour by X-axis source. What also confuses me is that when I look at the statistics, there are 3 sources the data is gathered from. Please leave a comment if you have the time, thank you so much Splunk Community!
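A common workaround for colouring bars by X-axis category (hedged, since I can't test it against this data) is to make each source its own series by splitting over and by the same field; each source then becomes a separate column in the results and gets its own colour and legend entry:

```
host="MEIPC" source="WinEventLog:Application" OR source="WinEventLog:Security" OR source="WinEventLog:System"
| chart count over source by source
```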
I don't want to get too specific, because it may work differently for different environments, but the keys were: mount.cifs to mount the Windows drive on the destination Linux machine, then rsync -avhipP to copy from the Windows drive to the Linux drive, adjusted from a Windows file structure to a Linux file structure.
I've got a visualization that counts the total number of errors using a lookup. Instead of the actual number of events I'd like to get the percentage that are errors. Image attached for reference.

| inputlookup fm4143_3d.csv
| stats count(ERROR_MESSAGE)
```| appendpipe [| stats count as message | eval message=if(message==0,"", " ")] | fields - message ```
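One way to turn that count into a percentage (a sketch, assuming ERROR_MESSAGE is only populated on error rows, so count(ERROR_MESSAGE) counts errors while a bare count counts all rows):

```
| inputlookup fm4143_3d.csv
| stats count as total count(ERROR_MESSAGE) as errors
| eval error_pct = round(errors / total * 100, 2)
| fields error_pct
```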
I did not find anything weird in the interface stats. A similar problem occurs on all Linux nodes, but the period/delay differs. Here is the btool output configuration:
I am getting a blank value:

index="webmethods_prd" host="USPGH-WMA2AISP*" source="/apps/WebMethods/IntegrationServer/instances/default/logs/ExternalPAEU.log" ("success" OR "fail*")
| eval status = if(searchmatch("success"), "Success", "Error")
| stats count by source, status
| xyseries source status count
| eval source=case(source="*PAEU.log", "Canada Pricing Call")
I am getting a blank value:

index="webmethods_prd" host="USPGH-WMA2AISP*" source="/apps/WebMethods/IntegrationServer/instances/default/logs/ExternalPAEU.log" ("success" OR "fail*")
| eval status = if(searchmatch("success"), "Success", "Error")
| stats count by source, status
| xyseries source status count
| eval source=case(source="*PACA.log", "Canada Pricing Call")
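One likely cause of the blank value: case() does exact string comparison, so source="*PACA.log" is compared literally, asterisk included, and never matches the full path. A hedged fix using like(), where % is the wildcard, with a true() fallback so unmatched sources keep their original value:

```
| eval source=case(like(source, "%PACA.log"), "Canada Pricing Call", true(), source)
```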
Search-level extraction: instead of full JSON extraction, I am thinking it could be easier to add index-time extraction as JSON, so that each event is treated as a JSON message, and for the remaining few lines we can do extraction at search level. Let me know your thoughts. @KendallW
I want to do everything at the indexer level. Any idea on how to handle this data set?
Could you please confirm what the updated query is? I am using the query below but get no results. Please advise.

index=whatever yoursearchterms
| bin _time span=1s
| stats count AS TPS by _time service
| stats max(TPS) AS "MaxTPS" min(TPS) AS "MinTPS" avg(TPS) AS "AVG TPS" by service
Hi, have you tried using a value bigger than 100 in count? Another option might be to use this: https://splunkbase.splunk.com/app/7171 r. Ismo
Hi, have you looked at and tried this? https://community.splunk.com/t5/Getting-Data-In/Ingesting-offline-Windows-Event-logs-from-different-systems/m-p/649515 r. Ismo
Sorry for the delay. I'm still having issues; I have posted my update.
I have updated tomcat.service with tools.jar but it is still not working.

Environment='JAVA_OPTS=-Djava.awt.headless=true -javaagent:/opt/appdynamics/javaagent.jar -Dappdynamics.agent.applicationName=biznuvo_test1_node2 -Dappdynamics.agent.tierName=biznuvo_test1_node2 -Dappdynamics.agent.nodeName=biznuvo_test1_node2-i-0e42a26c009127da0 -Dappdynamics.agent.uniqueHostId=biznuvo_TEST1_node2_ip-172_31_31_131 -Xbootclasspath:/usr/lib/jvm/jdk-1.8-oracle-x64/jre/lib/ext/tools.jar'
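One detail that stands out, assuming the intent was to add tools.jar on top of the normal boot classpath: bare -Xbootclasspath: replaces the JVM's boot classpath entirely, which usually prevents the JVM from starting; -Xbootclasspath/a: appends instead. A sketch showing only the changed flag (the ... elides the rest of JAVA_OPTS, which stays as in the unit file above):

```
Environment='JAVA_OPTS=... -Xbootclasspath/a:/usr/lib/jvm/jdk-1.8-oracle-x64/jre/lib/ext/tools.jar'
```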
Based on the number of your log events, it would have been a surprise if that had helped. Have you looked at the network interface stats to see if there is anything weird? Is it the case that this same issue occurs on all your Linux UF nodes? If yes, then it points heavily to a configuration issue! Can you show your outputs.conf settings exported by btool with the --debug option?
I have a field called eventtime in my logs, but the time is 19 characters long in epoch time (which goes to nanoseconds). The field is in the middle of the events, but I want to use it as the timestamp. However, when I define the TIME_PREFIX through the UI, it won't recognize it. There is another field that also has epoch time, but only 10 characters; when I use that one, it works, it just doesn't give me the nanoseconds. So it's not a syntax issue. There are no periods in the timestamp. How can I fix this? Using the UI for testing makes it easier to get feedback, but if I need to modify it in props.conf, that's fine.

Additional context: the data comes in JSON format, but only uses single quotes. I fixed this by using SEDCMD in props.conf to swap the single quotes for double quotes. In the TIME_PREFIX box (again, in the UI), I used single quotes, as double quotes didn't work (which makes sense).

'eventtime': '1707613171105400540'
'itime_t': 1707613170'
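A props.conf sketch for a 19-digit nanosecond epoch (the sourcetype name is a placeholder; note that timestamp extraction happens before SEDCMD in the parsing pipeline, which fits the observation that the single-quoted TIME_PREFIX is the one that matches):

```ini
[your:sourcetype]
SEDCMD-fixquotes = s/'/"/g
TIME_PREFIX = 'eventtime': '
MAX_TIMESTAMP_LOOKAHEAD = 19
# %s reads the epoch seconds; %9N reads the trailing nine subsecond digits
TIME_FORMAT = %s%9N
```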
I am using Splunk Cloud. As admin I created a new user, but the user is yet to receive an email notification with the necessary login details. What might be the issue?
Hi @haleyh44, here you can find the process of migrating from a standalone indexer to a clustered environment: https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Migratenon-clusteredindexerstoaclusteredenvironment Ciao. Giuseppe