All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a visualization that counts the total number of errors using a lookup. Instead of the actual number of events, I'd like to get the percentage that are errors. Image attached for reference. | inputlookup fm4143_3d.csv | stats count(ERROR_MESSAGE) ```| appendpipe [| stats count as message | eval message=if(message==0,"", " ")] | fields - message ```
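A minimal sketch of one way to get the percentage, assuming ERROR_MESSAGE is only populated on the error rows of the lookup (file and field names taken from the question):

| inputlookup fm4143_3d.csv
| stats count as total_rows count(ERROR_MESSAGE) as error_rows
| eval error_pct=round(error_rows / total_rows * 100, 2)

stats count counts every row, while count(ERROR_MESSAGE) counts only rows where that field has a value, so the eval yields the share of errors as a percentage.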
I have a field called eventtime in my logs, but the time is 19 characters long in epoch time (nanosecond precision). The field is in the middle of the events, and I want to use it as the timestamp. However, when I define the TIME_PREFIX through the UI, it won't recognize it. There is another field that also has epoch time, but only 10 characters; when I use that one, it works, it just doesn't give me the nanoseconds, so it's not a syntax issue. There are no periods in the timestamp. How can I fix this? Using the UI for testing makes it easier to get feedback, but if I need to modify it in props.conf, that's fine. Additional context: the data comes in JSON format but only uses single quotes. I fixed this by using SEDCMD in props.conf to swap the single quotes for double quotes. In the TIME_PREFIX box (again, in the UI), I used single quotes, as double quotes didn't work (which makes sense). 'eventtime': '1707613171105400540' 'itime_t': '1707613170'
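A hedged props.conf sketch for the nanosecond field; the sourcetype name is a placeholder, and it assumes TIME_PREFIX matches against the raw single-quoted event (SEDCMD runs later, in the typing pipeline), which would fit the observation that single quotes work in the UI:

[your_sourcetype]
TIME_PREFIX = 'eventtime':\s*'
TIME_FORMAT = %s%9N
MAX_TIMESTAMP_LOOKAHEAD = 19

%s reads the first 10 epoch-seconds digits and %9N reads the remaining 9 digits as nanoseconds; without a subsecond specifier, Splunk stops after the seconds, which is consistent with the 10-character field working and the 19-character one failing. MAX_TIMESTAMP_LOOKAHEAD caps how far past TIME_PREFIX Splunk scans for the timestamp.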
I am using Splunk Cloud. As admin, I created a new user, but the user has yet to receive an email notification with the necessary login details. What might be the issue?
In my environment I have 4 indexers. Daily indexing is 50 GB/day and the retention period is 30 days: within that window, hot/warm buckets are retained for 10 days and cold buckets for 20 days. How can we calculate the indexer storage?
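A rough worked calculation, using Splunk's common rule of thumb that indexed data occupies about half the raw volume (roughly 15% compressed rawdata plus 35% tsidx/index files), and ignoring replication:

50 GB/day x 30 days  = 1,500 GB raw over the retention window
1,500 GB x 0.5      ~= 750 GB of index storage in total
750 GB / 4 indexers ~= 190 GB per indexer

The hot/cold split changes where the buckets live, not the total: roughly 250 GB on the hot/warm volume (10 days) and 500 GB on cold (20 days), spread across the 4 indexers. Multiply by your replication factor if the indexers are clustered, and add headroom (20-30% is typical).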
Hello everyone, I am new to Splunk. I am trying to get the queue or event counts with status="spooling" that happened after the very first error (status="printing,error") occurred. How could I do this? I have events with: sourcetype=winprintmon host=bartender2020 type=PrintJob printer="*" (gets all printers). For example, zebra1065 could have a status of "printing", "printing,error", or "spooling". What I want to do: if a printer has an error (status="printing,error") at 6am, count the events for that printer with status="spooling" (which is the queue) that occurred after 6am. Desired result format: printer name | count of spooling (queue). Hope this explains it better; I've been dealing with this for days. Thank you so much in advance!
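A sketch of one approach, using the field names from the question: min(eval(...)) finds each printer's first error timestamp, and the where clause keeps only the spooling events after it.

sourcetype=winprintmon host=bartender2020 type=PrintJob printer="*"
| eventstats min(eval(if(status=="printing,error", _time, null()))) as first_error_time by printer
| where status=="spooling" AND _time >= first_error_time
| stats count as spooling_queue by printer

Printers that never hit an error get a null first_error_time, so the where clause drops them automatically.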
This is my Splunk query: index="webmethods_prd" source="/apps/WebMethods/IntegrationServer/instances/default/logs/ExternalPACA.log" | eval timestamp=strftime(_time, "%F") | chart limit=30 count as count over source by timestamp It is showing the result, but I want to add a custom name to it. How should I do that?
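If the goal is a friendlier label than the full source path, one hedged option is to eval a label field and chart over that instead; the field name "service" and the label "ExternalPACA" here are placeholders:

index="webmethods_prd" source="/apps/WebMethods/IntegrationServer/instances/default/logs/ExternalPACA.log"
| eval timestamp=strftime(_time, "%F")
| eval service="ExternalPACA"
| chart limit=30 count as count over service by timestamp

If instead it's a column or series name that needs changing, a trailing | rename <old> as "<new name>" on the chart output does the same job.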
The Splunk support portal doesn't let me file a case: it expects an input for "Splunk Support access to your company data", but no option is available to select.
Hello Splunkers!! I want to achieve the results below in Splunk. Please help me with how to achieve this in SPL. Whenever the field carries a number string, I want the expected results below.

Current results -> Expected values
1102.1.1 -> 1102.01.01
1102.1.2 -> 1102.01.02

Thanks in advance!!
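A minimal sketch, assuming the first segment stays as-is and every later segment is zero-padded to two digits; the field name "value" is a placeholder:

| makeresults
| eval value="1102.1.2"
| eval parts=split(value, ".")
| eval head=mvindex(parts, 0)
| eval tail=mvindex(parts, 1, mvcount(parts)-1)
| eval tail=mvmap(tail, printf("%02d", tonumber(tail)))
| eval expected=head . "." . mvjoin(tail, ".")

mvmap applies printf("%02d", ...) to each segment after the first, turning 1 into 01 while leaving 1102 untouched.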
Hello all, I installed on-premises AppDynamics 24.7 on a Rocky Linux 9.4 host. After completing the Enterprise Console installation (through the installation script "platform-setup-x64-linux-24.7.0.10038.sh"), I continued to set up the Controller (demo profile) and Events Service. The three jobs completed successfully, as shown below. The Controller starts OK, but the Events Service cannot start up, and a red Critical health status is highlighted. The error message: "Task failed: Starting the Events Service api store node ..." How do I get the Events Service started? Thanks.
Hello everyone, I am trying to get the queue or event counts with status="spooling" that happened after the very first error (status="*error*") occurred. How could I do this? Thank you in advance. This is for our company's printer server.
Good morning, I have been looking for a solution to this problem for a while. What I am trying to accomplish is re-ingesting .evtx files back into the system (or another system) so that I can use a UF to re-ingest old logs that have been exported and archived. I hope I am clear, as the ask is hard to articulate: old .evtx files -> Windows machine (put the logs back into the Windows machine), which will then allow me to use a UF to send the re-ingested logs to Splunk. I have tried converting the .evtx files to text with a PowerShell script, but this would take a significant amount of time due to the size of my current .evtx files: on average it was taking about 30 minutes per log file, and I have too many to count.
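One possibly relevant shortcut, assuming the archived files are on (or copied to) a Windows host running a Splunk instance: Splunk documents that a Windows instance can monitor exported .evt/.evtx files directly, without converting them to text first. A sketch of the inputs.conf stanza (the path and index are placeholders):

[monitor://C:\Archive\EventLogs\*.evtx]
disabled = 0
index = wineventlog_archive

Splunk uses the Windows Event Log API to read the files, so this only works on a Windows instance, and old .evtx files exported from a different host can have SIDs that no longer resolve.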
We were running Splunk Enterprise v9.2 on our deployment server and everything worked fine. After upgrading to v9.3.0, the path "https://<fqhn>/en-US/manager/system/deploymentserver" no longer renders. Tried on 3 computers using several different browsers; all return a blank white screen on this URL only. All other dashboards on this host work fine; it is only the "Forwarder Management" link. Nothing in the logs other than INFO events, and nothing to indicate a problem. Any ideas what is going on?
I use a stats command in a search in a dashboard which results in about 600 rows. Splunk places a "next" button in the dashboard for every 100 rows (option name="count" is 100). We deliver the result of this dashboard as a PDF, so much of the result gets lost. I can solve this by using streamstats to show the result in parts, but I wonder why the limit is 100 and whether it is possible to display more than 100 rows at once (without tricks like streamstats).
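For the Simple XML side, the count option is not hard-capped at 100; 100 is just the default page size. A sketch raising it on the table element (600 matching the row count mentioned above; the query is a placeholder):

<table>
  <search>
    <query>... your stats search ...</query>
  </search>
  <option name="count">600</option>
</table>

PDF rendering has its own table limits, so it's worth testing the export after raising the option.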
Hi, ok, so I updated AME to version 3.0.8. Now I can't access anything, even though I am sc_admin. I can't see the start page, and I can't configure anything because it says I must be sc_admin. I checked users and roles and they are fine. Any thoughts?
The classic dashboard format was XML; the new Dashboard Studio format is JSON. Our app/launcher/home is failing to load JSON dashboards with a 400 Bad Request, displaying the "horse" error page and complaining that the first line must be XML. How do we remove this restriction? Thank you.
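A hedged detail that may explain the 400: Dashboard Studio dashboards are still stored as XML view files, with version="2" and the JSON definition wrapped in CDATA, roughly:

<dashboard version="2" theme="light">
  <definition><![CDATA[
    { ...Dashboard Studio JSON... }
  ]]></definition>
</dashboard>

If the view file contains bare JSON with no XML wrapper, the "first line must be XML" complaint is expected behavior, so re-saving the dashboard from Dashboard Studio (or adding the wrapper) may be the fix rather than removing a restriction.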
Hi Splunk community, I'm facing an issue with my Splunk deployment server, running version 9.2.1 (splunk-9.2.1-78803f08aabb-linux-2.6-x86_64-manifest). I've added new configurations to the inputs.conf file for a WebLogic server within a specific deployment class. After making these changes, I pushed the configurations to the target WebLogic server and triggered a restart. Unfortunately, the new settings in inputs.conf are not being applied on the WebLogic server, even though the deployment server logs indicate that the service was successfully restarted. Has anyone experienced this issue, or can anyone offer advice on what might be causing the problem and how to resolve it? Thanks in advance!
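One hedged check on the client side: btool shows which configuration file actually wins for a given setting after the deploy, so running it on the WebLogic server (the grep pattern is a placeholder for your monitored path or stanza) can confirm whether the pushed app arrived and whether something with higher precedence overrides it:

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i "your/monitored/path"

If the new stanza doesn't appear at all, check that the deployed app landed under $SPLUNK_HOME/etc/apps on the client; if it appears but loses to another file, the file paths shown by --debug point at the culprit.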
Hi all, I need help with the timechart and trendline commands in the query below; neither command is working: index=_introspection sourcetype=splunk_resource_usage component=Hostwide | eval total_cpu_usage=('data.cpu_system_pct' + 'data.cpu_user_pct') | stats Perc90(total_cpu_usage) AS cpu_usage latest(_time) as _time by Env Tenant | timechart span=12h values(cpu_usage) as CPU | trendline sma2(CPU) AS trend
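The likely issue is that stats collapses everything to one row per Env/Tenant (with a single latest _time), so timechart has no time series left to plot. A sketch that keeps the time axis by binning _time before stats; it assumes Env and Tenant are fields already present in your events:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval total_cpu_usage='data.cpu_system_pct' + 'data.cpu_user_pct'
| bin _time span=12h
| stats perc90(total_cpu_usage) as cpu_usage by _time Env Tenant
| timechart span=12h values(cpu_usage) as CPU
| trendline sma2(CPU) as trend

As in the original, timechart here aggregates across Env and Tenant; add a by-clause to timechart instead if you need one series per tenant.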
There is a request to provide the list of P1C alerts for the JMET cluster from Splunk. We have provided the following query, but the user wants only the alerts whose priority is P1C: | rest /servicesNS/-/-/saved/searches | table title, eai:acl.owner, search, actions, action.apple_alertaction* This query returns all configured alerts, but we want only the P1C ones. It's urgent.
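A hedged sketch of the filter, assuming the priority lives in a parameter of the custom alert action; the exact field name (action.apple_alertaction.param.priority here) is a guess and should be checked against one known P1C alert:

| rest /servicesNS/-/-/saved/searches
| where 'action.apple_alertaction.param.priority'=="P1C"
| table title, eai:acl.owner, search, actions, action.apple_alertaction*

Running the original query on a single known P1C alert and eyeballing the action.apple_alertaction* columns is the quickest way to find the real field to filter on.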
Hi all, I installed Splunk Enterprise 9.2.1 on my machine recently. There are no other external apps or components installed, but the UI is very slow. The loading time for each webpage, including the login page, is slow; it takes around a minute to finish loading. Could anyone provide some suggestions as to why this is happening and how to fix it?
Hi Splunk experts, I want to compare the response codes of our API over the last 4 hours with the same window over the last 2 days. If possible, I need the results in a chart/table format that shows the data as below: <Response Codes | Last 4 Hours | Yesterday | Day before Yesterday> As of now I am getting results hour by hour. Can we achieve this in Splunk? Can you please guide me in the right direction?
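A sketch using append to line up the three windows as columns; the index and field names (your_api_index, response_code) are placeholders:

index=your_api_index earliest=-4h latest=now
| eval window="Last 4 Hours"
| append [search index=your_api_index earliest=-28h latest=-24h | eval window="Yesterday"]
| append [search index=your_api_index earliest=-52h latest=-48h | eval window="Day before Yesterday"]
| chart count over response_code by window

chart ... over response_code by window produces the <Response Codes | Last 4 Hours | Yesterday | Day before Yesterday> layout; chart orders the columns alphabetically, so add a trailing | table response_code "Last 4 Hours" Yesterday "Day before Yesterday" to fix the order.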