All Topics

We're migrating Splunk from our on-premises environment to the Cloud, and we have finished setting up forwarders to send data to Splunk Cloud. However, we have a large number of alerts, reports, and dashboards created on on-premises Splunk. Is there a way to transfer these (alerts, reports, and dashboards) from on-premises Splunk to Splunk Cloud? Thanks.
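A minimal sketch of one way to inventory the knowledge objects on the on-premises search head before migrating, using the rest command (run it as an admin; the selected columns are just examples):

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | table title, eai:acl.app, eai:acl.owner, is_scheduled

The same pattern works against /servicesNS/-/-/data/ui/views for dashboards, which can help decide which apps need to be packaged and moved.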
I want a Python script to download a dashboard as an image, or to send a dashboard as an HTML email with a header and footer, using Python or a client-side script. Is there a way to authenticate to the Splunk URL using SSO or a token? Currently it uses Microsoft SSO.
Hi, I have now filled out this web form twice in the last 24 hours to join the Splunk Usergroups Slack channel, but I still have not received the expected reply: https://docs.google.com/forms/d/e/1FAIpQLSd2PXSBiatZvCIpdE2wPFgnrUM29HBYjrkI0iDhlx26RwwE4A/viewform Can anyone help, as I need to join this Slack group for my current development work? Many thanks,
Hello community. I use Splunk for one of my projects and I have a question. I have a query which roughly looks like this:

    index=app* rum.plugin="myPluginId" rum.status="Error" rum.apiCall="apiCallName"
    | chart count by rum.companyId

which gives a result like:

    rum.companyId | count
    ------------- | -----
    456789456     | 6
    827634966     | 2
    456789057     | 4
    098765456     | 6
    123456789     | 677

I run this query over the last 24 hours. Now, for the companyIds listed, I want to check whether a similar error occurred for these companies (rum.companyId) in the past, and if it has, show the timestamp of the first occurrence. So my expected output is something like:

    rum.companyId | count | First occurrence timestamp
    ------------- | ----- | --------------------------
    456789456     | 6     | 20/04/90 04:04:04
    827634966     | 2     | 20/04/90 04:04:04
    456789057     | 4     | 20/04/90 04:04:04
    098765456     | 6     | 20/04/90 04:04:04
    123456789     | 677   | 20/04/90 04:04:04

Is there any way to achieve this? Thanks in advance.
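A minimal sketch of one approach: search over a longer window (30 days here, purely as an example), take the first occurrence from the whole window, and restrict the count to the last 24 hours with a conditional eval inside stats:

    index=app* rum.plugin="myPluginId" rum.status="Error" rum.apiCall="apiCallName" earliest=-30d
    | stats count(eval(_time >= relative_time(now(), "-24h"))) as count, min(_time) as first_occurrence by rum.companyId
    | where count > 0
    | eval first_occurrence = strftime(first_occurrence, "%d/%m/%y %H:%M:%S")

The where clause keeps only companies that actually erred in the last 24 hours, while first_occurrence reflects the earliest hit in the longer window.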
Hey all, I need a PDF of this year's .conf22 schedule to send to my job, but I cannot download a PDF from the app. I need the full schedule for both Splunk University and the conference week. Where can I download this PDF? Thanks!
I'm posting this in case someone else has the problem I struggled with. I was calculating upload and download totals per web domain per company location into a list. The format of the table was such that I ended up with the company location, followed by a multivalue list of the web domains and a multivalue list of the byte totals. The byte totals, being 7- to 8-digit numbers, are easier to read with commas, but the usual formatting solution:

    eval Download=tostring(Download, "commas")
    eval Upload=tostring(Upload, "commas")

had mixed results depending on where in the query I placed it. Right after the initial transforming command, it broke my sorting, since the field was now a string. At the end, it summed the multivalue field and then put the commas in, so that didn't help either.
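A minimal sketch of the pattern that usually resolves this, assuming Splunk 8.0+ where mvmap is available: do all numeric work (sorting, summing) first, then convert each value of the multivalue field to a comma-formatted string as the very last step (domain and location are hypothetical field names; Download and Upload follow the post):

    ... | stats list(domain) as domain, list(Download) as Download, list(Upload) as Upload by location
    | eval Download=mvmap(Download, tostring(Download, "commas"))
    | eval Upload=mvmap(Upload, tostring(Upload, "commas"))

mvmap applies tostring to each value of the multivalue field individually, which avoids the summing behaviour seen when tostring is applied to the whole field at the end.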
I recently discovered that "tstats" is returning sourcetypes which do not exist. Query:

    | tstats values(sourcetype) where index=* by index

This returns a list of sourcetypes grouped by index. While it appears to be mostly accurate, some of the sourcetypes returned for a given index do not exist. For example, the sourcetype "WinEventLog:System" is returned for myindex, but the following query produces zero results:

    index=myindex sourcetype="WinEventLog:System"

This is the case for multiple indexes. If my understanding of "tstats" is correct, it works by only analyzing indexed fields that are stored in the tsidx files. If no events exist with a given sourcetype for a specific index, how could that value possibly have been saved in the tsidx files?
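A minimal sketch for checking one common cause: the events may simply fall outside the time range of the event search. Run this over All Time to see when events with that sourcetype actually landed in the index:

    | tstats count, min(_time) as earliest, max(_time) as latest where index=myindex sourcetype="WinEventLog:System"
    | eval earliest=strftime(earliest, "%F %T"), latest=strftime(latest, "%F %T")

If count is nonzero but latest is far in the past, the sourcetype does exist in the tsidx files; the event search just wasn't looking back far enough.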
Hi Splunkers, this may be easy, but I'm not able to solve it; perhaps someone can help. I want to set the lower threshold to 15 standard deviations below the mean, and the upper threshold to 15 standard deviations above the mean, but I'm not sure how to implement that. Thanks!
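A minimal sketch of one way to implement this, assuming a numeric field called value (the field name is hypothetical):

    ... | eventstats avg(value) as mean, stdev(value) as stdev
    | eval lowerBound = mean - 15 * stdev, upperBound = mean + 15 * stdev
    | where value < lowerBound OR value > upperBound

eventstats attaches the mean and standard deviation to every event, so the thresholds can then be computed and filtered on per event.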
I'm trying to get a count of activity on around 10 different APIs. The search is:

    index=api_logs | bin span=5min _time | stats count by _time, APIName

Is it possible to use stats count so the output includes an entry for each API in each 10 minute period and reports a '0' if there hasn't been a call? I know you could chart it, but I'd like the data in this particular format.
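A minimal sketch of one common workaround, the timechart/untable round trip: timechart fills missing buckets with 0, and untable flips the result back into the stats-style row format the post asks for:

    index=api_logs
    | timechart span=5m limit=0 useother=f count by APIName
    | untable _time APIName count

limit=0 and useother=f prevent the ~10 API names from being collapsed into an OTHER series.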
I am trying to use the correlate command in Splunk but keep receiving "1.0" or other numbers as the correlation value when it should not be. For example, I have two columns in my table, each with values "increase" or "decrease" based on how much data is being ingested hour to hour. When I use correlate after that, however, I get 1.0 as the correlation value when it is not 100%. So what exactly is the command correlating? Is it not the table? Is it something with the indexes behind the scenes? Also, how do you use parentheses after the correlate command to input fields? All help is appreciated; I have been working on this for a while.
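For context, correlate measures how often fields co-occur in the same events, not the statistical correlation of their values, which would explain a 1.0 when both columns are present in every row. A minimal sketch of a value-level Pearson correlation instead, assuming the two columns are named x and y and mapping increase/decrease to +1/-1 (all field names here are hypothetical):

    ... | eval x=if(x=="increase", 1, -1), y=if(y=="increase", 1, -1)
    | eventstats avg(x) as ax, avg(y) as ay
    | eval dx=x-ax, dy=y-ay
    | stats sum(eval(dx*dy)) as sxy, sum(eval(dx*dx)) as sxx, sum(eval(dy*dy)) as syy
    | eval r = sxy / sqrt(sxx * syy)

This computes the standard Pearson r over the encoded values, which is usually what "correlation between two columns" is meant to be.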
Hi, AppD connects to our Spring Boot applications for metrics. Presently, AppD starts first and takes more than 3 minutes to come up, and only then does our application start. We need help speeding up the AppD startup time; we want to start our Spring Boot application in parallel with the AppD startup instead of waiting 3+ minutes for AppD to start. Thanks in advance for your help, Sud
I have a scenario where I am analyzing the format of a given string to determine the name of the format (e.g. UPN, samaccount, etc.). From there, I am trying to do a conditional enrichment via lookup to determine more information about the user in question. The trouble is I have 4 "potential" systems of record the account could come from, and different authoritative key/value pairs to uniquely identify the user. The good news is there is at least one value in each of these systems of record that is the same thing, so I need to normalize that down.

My method of attacking this:

    user=jimbob@joe.com
    AccountType=(formula to determine "samact", "upn", or "other")

I have to use lookup because inputlookup does not appear to have any idea what $variables$ are in an eval statement.

    SOR1_upn=if(AccountType = "upn", [makeresults count=1 | eval user=$user$ | lookup SOR1.csv userPrincipleName AS user | fields givenName | head 1 | return $givenName], "")

I would have expected this to work using normal subsearch logic, so I don't know if it's a problem with using it in eval or if there is some additional escape character I should be providing.

Another method I thought of for attacking this is just to create unique values for every possible outcome I want from the different SORs, with unique names, and then coalesce them all together, but it seems like there should be a more elegant way to do this in Splunk.

In summary: identify the type of account, check 4 different SORs for the presence of that account, return a fixed set of values from each one (which should ideally all represent the same individual if the account exists in more than one place), and then coalesce them together.
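A minimal sketch of the lookup-then-coalesce pattern the post mentions, assuming one CSV per system of record (SOR1.csv, userPrincipleName, and givenName come from the post; SOR2.csv and samAccountName are hypothetical stand-ins for the other SORs):

    ... | lookup SOR1.csv userPrincipleName as user OUTPUT givenName as sor1_givenName
    | lookup SOR2.csv samAccountName as user OUTPUT givenName as sor2_givenName
    | eval givenName = coalesce(sor1_givenName, sor2_givenName)

A lookup that finds no match simply leaves its output fields null, so no conditional is needed: coalesce picks the first SOR that matched. The if(...) attempt above fails because eval cannot execute a subsearch; square brackets inside an eval expression are not treated as subsearch syntax.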
    index=wineventlog EventCode=4625
    | search user!="sa*" AND user!="VD*" AND user_email!=""
    | bucket _time span=10m
    | eval minute=strftime(_time, "%M")
    | eval hour=strftime(_time, "%H")
    | eval day=strftime(_time, "%D")
    | eval wday=strftime(_time, "%A")
    | stats count(EventCode) as aantal by hour, wday, day
    | rename aantal as #_failed_logins
    | eval search_value = wday+"_"+hour
    | table hour, day, wday, search_value, #_failed_logins, upperBound, upperBound_2stdev, upperBound_2.5stdev, upperBound_3stdev, upperBound_3.5stdev, upperBound_4stdev, twoSigmaLimit, hour_avg, hour_avg_2sig, hour_stdev, hour_stdev_2sig

Every day this query gives a different count.
Hi everyone! I would appreciate your help with the following search; I can't figure out how to do this. I need to add the customer name to a list of hosts.

1. The search below returns a list of hosts, with their GUIDs, whose certificates are about to expire:

    index=indexname environment=prod
    | eval host=rtrim(host, ".prod.net")
    | eval host=(host."-prod")
    | lookup host-guid hostName as host OUTPUT hostGuid
    | table host hostGuid

2. The search below returns the customer name per host:

    | inputlookup workspace where poolGuid!=*
        [| inputlookup workspaceServer where hostGuid=".*"
         | rename workspaceServerGuid as currentWorkspaceServerGuid
         | return currentWorkspaceServerGuid]
    | lookup workspaceServer workspaceServerGuid as currentWorkspaceServerGuid output hostGuid name as core
    | lookup host hostGuid output hostName
    | rename currentCustomerGuid as customerGuid name as workspaceName
    | lookup customer customerGuid output name as customerName
    | stats count by hostName hostGuid core customerName customerGuid workspaceName workspaceGuid
    | fields - count

How can I combine these two queries to get the customer name just for the hosts from search #1? Thank you.
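A minimal sketch of one way to combine them, assuming hostGuid is the common key: keep the certificate search as the outer search and pull the customer mapping in as a join subsearch (all pipeline pieces below come from the two searches in the post; note that join subsearches are subject to Splunk's subsearch result limits):

    index=indexname environment=prod
    | eval host=rtrim(host, ".prod.net")."-prod"
    | lookup host-guid hostName as host OUTPUT hostGuid
    | join type=left hostGuid
        [| inputlookup workspace where poolGuid!=*
            [| inputlookup workspaceServer where hostGuid=".*"
             | rename workspaceServerGuid as currentWorkspaceServerGuid
             | return currentWorkspaceServerGuid]
         | lookup workspaceServer workspaceServerGuid as currentWorkspaceServerGuid output hostGuid
         | rename currentCustomerGuid as customerGuid
         | lookup customer customerGuid output name as customerName
         | fields hostGuid customerName]
    | table host hostGuid customerName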
Hi, I need to update the Universal Forwarder credential package manually. Due to our configuration, I can't follow the steps outlined in this document. I unpacked the `.spl` file that's required for the update and noticed that it follows the directory structure of our current Splunk configuration. Is there a way we can manually unpack it and apply the update? What does '/opt/splunkforwarder/bin/splunk install app' actually do with the .spl package?
Hi all, I have a JSON file in this format:

    {
      "NUM": "5",
      "EXECUTION_DATE": "04-07-2022",
      "STATUS": "FAILURE",
      "DURATION": "5 hrs, 13 mins",
      "PARTS": [
        {
          "NAME": "abc",
          "PART_NO": [ "2634702", "2634456", "2634890" ]
        },
        {
          "NAME": "xyz",
          "PART_NO": [ "2634702" ]
        }
      ]
    }

I want to calculate the count of PART_NO values and plot it in a chart. The PART_NO values repeat, and I want the repeated values counted as well, so I used count. I used

    | timechart count(PARTS{}.PART_NO{})

but it is giving the wrong count. Is there a different method to calculate the count?
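A minimal sketch of one approach: count(field) in timechart counts events in which the field exists, not the individual array entries, so compute the per-event entry count with mvcount and sum it instead (the single quotes are needed in eval for a field name containing braces):

    ... | eval part_count = mvcount('PARTS{}.PART_NO{}')
    | timechart sum(part_count) as part_count

Because mvcount counts every value in the multivalue field, repeated part numbers within an event are included in the total.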
Hey, I have an inputlookup and I need to perform a stats values on one of the columns, "Migration Comments". I am able to use the stats functions on every column EXCEPT the one column I actually need to perform the function on. It seems it doesn't recognise the field name, even though I am copying the name of the field into the query. Here is the data table: [screenshot not included] And here is what the query I am trying to run looks like: [screenshot not included] What am I doing wrong??? Many thanks,
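A minimal sketch of the usual fix: field names containing spaces have to be quoted, or renamed to a space-free name before stats (the lookup file name and the grouping column are hypothetical, since the screenshots didn't come through; "Migration Comments" is from the post):

    | inputlookup migration_tracker.csv
    | rename "Migration Comments" as migration_comments
    | stats values(migration_comments) as migration_comments by Hostname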
So, I'm looking for a way to synchronize Custom Lists to git the same way Playbooks and Custom Functions are synchronized. Is there a baked-in way to do it?
Hello Splunkers,

A few days ago, most of the serverclasses on our Deployment Server uninstalled an output app by themselves. As a result, splunkd was restarted on the UFs and data stopped being forwarded from the hosts. For context, each serverclass in our environment consists of a deployment app with inputs.conf, where we specify sources, and another deployment app called 'output_app' with outputs.conf, to get data forwarded to the indexer cluster.

Example logs from one of the affected UFs:

    06-29-2022 12:15:47.893 +0200 INFO DeployedServerclass - Serverclass=inputs_test_prod is uninstalling app=/opt/splunkforwarder/etc/apps/output_app
    06-29-2022 12:15:47.893 +0200 INFO DeployedApplication - Removing app=output_app at='/opt/splunkforwarder/etc/apps/output_app'
    06-29-2022 12:15:47.904 +0200 WARN DC:DeploymentClient - Restarting Splunkd...
    06-29-2022 12:15:47.905 +0200 WARN Restarter - Splunkd is configured to run as a systemd service, skipping external restart process
    06-29-2022 12:15:47.905 +0200 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_123.456.7.89_8089_z1il0123.xyz.ai_z1il0123.zyx.ai_95C4E8F1-731A-4280-9F09-93B03EAFB3DE
    06-29-2022 12:15:48.206 +0200 INFO loader - Shutdown HTTPDispatchThread
    06-29-2022 12:15:48.206 +0200 INFO ShutdownHandler - Shutting down splunkd
    06-29-2022 12:15:48.206 +0200 INFO ShutdownHandler - shutting down level "ShutdownLevel_Begin"
    06-29-2022 12:15:48.207 +0200 INFO ShutdownHandler - shutting down level "ShutdownLevel_FileIntegrityChecker"
    06-29-2022 12:15:48.207 +0200 INFO ShutdownHandler - shutting down level "ShutdownLevel_JustBeforeKVStore"
    06-29-2022 12:15:48.207 +0200 INFO ShutdownHandler - shutting down level "ShutdownLevel_KVStore"
    06-29-2022 12:15:48.207 +0200 INFO ShutdownHandler - shutting down level "ShutdownLevel_DFM"
    06-29-2022 12:15:48.207 +0200 INFO ShutdownHandler - shutting down level "ShutdownLevel_Thruput"
    06-29-2022 12:15:48.207 +0200 INFO ShutdownHandler - shutting down level "ShutdownLevel_TcpInput1"
    06-29-2022 12:15:48.207 +0200 INFO TcpInputProc - Running shutdown level 1. Closing listening ports.
    06-29-2022 12:15:48.207 +0200 INFO TcpInputProc - Done setting shutdown in progress signal.

outputs.conf:

    # Turn off indexing on the master
    [indexAndForward]
    index = false

    [tcpout]
    defaultGroup = splunk_prod
    forwardedindex.filter.disable = true
    indexAndForward = false

    [tcpout:splunk_prod]
    server=z1il0001.zyx.ai.zz:9997,z1il0002.zyx.ai.zz:9997, z1il0003.zyx.ai.zz:9997, z1il0004.zyx.ai.zz:9997, z1il0005.zyx.ai.zz:9997, z1il0006.zyx.ai.zz:9997
    autoLB = true

Have you ever encountered such an issue? How is it possible that a serverclass gets rid of an app by itself? The last change we made was a Deployment Server upgrade from 8.2.3.3 to 9.0, but we did that on 24.06. Any idea what the root cause could be?

Greetings,
Dzasta
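A minimal sketch for scoping which UFs and serverclasses were affected, assuming the forwarders send their internal logs to the indexers (component is an auto-extracted field for splunkd logs; the rex pattern follows the log line shown above):

    index=_internal sourcetype=splunkd component=DeployedServerclass "uninstalling app"
    | rex "Serverclass=(?<serverclass>\S+)"
    | stats count, max(_time) as last_seen by host, serverclass
    | eval last_seen=strftime(last_seen, "%F %T")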
Hello, I have an on-prem indexer which I want to shut down, and I want to move all of its content to another indexer in Azure. What is the best practice for that? Thanks