I am trying to use the correlate command in Splunk, but I keep receiving 1.0 (or other unexpected numbers) as the correlation value when it should not be that high. For example, I have two columns in my table, each with the value "increase" or "decrease" based on how much data is being ingested hour to hour. When I run correlate after that, however, I get 1.0 as the correlation value even though the columns do not agree 100% of the time. So what exactly is the command correlating? Is it not the table? Is it something with the indexes behind the scenes? Also, how do you use parentheses after the correlate command to input fields? All help is appreciated; I have been working on this for a while.
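A likely explanation, offered tentatively: `correlate` measures the co-occurrence of fields across events (how often two fields appear in the same events), not the statistical correlation of their values. Two columns that both exist in every row of a table will therefore report 1.0 regardless of their contents. To compare the values themselves, one sketch is the following, where the field names `trendA` and `trendB` are placeholders for the two columns:

```
... your search producing trendA and trendB ...
| eval agree=if(trendA==trendB, 1, 0)
| stats avg(agree) AS agreement_rate
```

With this approach an agreement_rate of 1.0 genuinely means the two columns always match.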
Hi, AppDynamics connects to our Spring Boot applications for metrics. Presently AppD starts first, takes more than 3 minutes to come up, and only then does our application start. We need help speeding up the AppD startup, and we would like to start our Spring Boot application in parallel with the AppD startup instead of waiting 3+ minutes for AppD to come up. Thanks in advance for your help. Sud
I have a scenario where I am analyzing the format of a given string to determine the name of the format (e.g. UPN, sAMAccountName, etc.). From there, I am trying to do a conditional enrichment via lookup to determine more information about the user in question. The trouble is that I have 4 "potential" systems of record the account could come from, each with different authoritative key/value pairs to uniquely identify the user. The good news is that there is at least one value in each of these systems of record that represents the same thing, so I need to normalize that down.

My method of attacking this:

user=jimbob@joe.com AccountType=(formula to determine "samact", "upn", or "other")

I have to use lookup because inputlookup does not appear to have any idea what $variables$ are in an eval statement.

SOR1_upn=if(AccountType = "upn", [makeresults count=1 | eval user=$user$ | lookup SOR1.csv userPrincipleName AS user | fields givenName | head 1 | return $givenName], "")

I would have expected this to work using normal subsearch logic, so I don't know if it's a problem with using it inside eval or if there is some additional escape character I should be providing. Another method I thought of is to create uniquely named fields for every possible outcome I want from the different SORs, and then coalesce them all together, but it seems like there should be a more elegant way to do this in Splunk.

In summary: identify the type of account, check 4 different SORs for the presence of that account, return a fixed set of values from each one (values that should ideally all represent the same individual if the account exists in more than one place), and then coalesce them together.
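For what it's worth, eval cannot execute a subsearch, which is why the if(...) construct above never runs the bracketed search. The coalesce approach mentioned in the post is usually the idiomatic one, because `lookup` is non-destructive when the key doesn't match. A sketch, in which the SOR2/SOR3/SOR4 lookup names and key fields are placeholders:

```
| lookup SOR1.csv userPrincipleName AS user OUTPUTNEW givenName AS sor1_givenName
| lookup SOR2.csv sAMAccountName AS user OUTPUTNEW givenName AS sor2_givenName
| lookup SOR3.csv mail AS user OUTPUTNEW givenName AS sor3_givenName
| lookup SOR4.csv employeeId AS user OUTPUTNEW givenName AS sor4_givenName
| eval givenName=coalesce(sor1_givenName, sor2_givenName, sor3_givenName, sor4_givenName)
```

Since each lookup only fills its output field when the key matches, the coalesce picks the first SOR that recognized the user.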
index=wineventlog EventCode=4625
| search user!="sa*" AND user!="VD*" AND user_email!=""
| bucket _time span=10m
| eval minute=strftime(_time, "%M")
| eval hour=strftime(_time, "%H")
| eval day=strftime(_time, "%D")
| eval wday=strftime(_time, "%A")
| stats count(EventCode) as aantal by hour, wday, day
| rename aantal as #_failed_logins
| eval search_value = wday+"_"+hour
| table hour, day, wday, search_value, #_failed_logins, upperBound, upperBound_2stdev, upperBound_2.5stdev, upperBound_3stdev, upperBound_3.5stdev, upperBound_4stdev, twoSigmaLimit, hour_avg, hour_avg_2sig, hour_stdev, hour_stdev_2sig

Every day this query gives a different count.
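Two hedged observations: the upperBound/hour_avg/hour_stdev columns in the final `table` are never computed anywhere in this search, so they come out empty, and a relative time range (e.g. "last 24 hours") naturally yields a different count every day. A sketch of computing the per-hour, per-weekday baseline with `eventstats` (the 2-sigma threshold is an arbitrary choice for illustration):

```
index=wineventlog EventCode=4625 user!="sa*" user!="VD*" user_email!=""
| bucket _time span=1h
| stats count AS failed_logins BY _time
| eval hour=strftime(_time, "%H"), wday=strftime(_time, "%A")
| eventstats avg(failed_logins) AS hour_avg, stdev(failed_logins) AS hour_stdev BY hour, wday
| eval upperBound_2stdev=hour_avg + 2 * hour_stdev
| where failed_logins > upperBound_2stdev
```

Running it over a fixed window (for example earliest=-30d@d latest=@d) makes the counts reproducible from day to day.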
Hi everyone! I would appreciate your help with the following search; I can't figure out how to do this. I need to add the customer name to the list of hosts.

1. The search below returns a list of hosts and their GUIDs with certificates that are about to expire:

index=indexname environment=prod
| eval host=rtrim(host, ".prod.net")
| eval host=(host."-prod")
| lookup host-guid hostName as host Output hostGuid
| table host hostGuid

2. The search below returns the customer name per host:

| inputlookup workspace where poolGuid!=*
    [| inputlookup workspaceServer where hostGuid=".*"
     | rename workspaceServerGuid as currentWorkspaceServerGuid
     | return currentWorkspaceServerGuid]
| lookup workspaceServer workspaceServerGuid as currentWorkspaceServerGuid output hostGuid name as core
| lookup host hostGuid output hostName
| rename currentCustomerGuid as customerGuid name as workspaceName
| lookup customer customerGuid output name as customerName
| stats count by hostName hostGuid core customerName customerGuid workspaceName workspaceGuid
| fields - count

How can I combine those two queries and get the customer name just for the hosts from search #1? Thank you
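One hedged way to combine them is to run the second search as a subsearch and join on hostGuid. The subsearch below is an abbreviated version of search #2, reduced to the two fields the join needs, and is subject to the usual subsearch result limits:

```
index=indexname environment=prod
| eval host=rtrim(host, ".prod.net")."-prod"
| lookup host-guid hostName AS host OUTPUT hostGuid
| join type=left hostGuid
    [ | inputlookup workspace where poolGuid!=*
      | lookup workspaceServer workspaceServerGuid AS currentWorkspaceServerGuid OUTPUT hostGuid
      | rename currentCustomerGuid AS customerGuid
      | lookup customer customerGuid OUTPUT name AS customerName
      | table hostGuid customerName ]
| table host hostGuid customerName
```

With type=left, hosts from search #1 that have no customer mapping still appear, just with an empty customerName.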
Hi, I need to update the Universal Forwarder credential package manually. Due to our configuration, I can't follow the steps outlined in the documentation. I unpacked the .spl file that's required for the update and noticed that it follows the directory structure of our current Splunk configuration. Is there a way we can manually unpack it and make this update? What does '/opt/splunkforwarder/bin/splunk install app' actually do with the .spl package?
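To the best of my understanding (verify on a test forwarder first): a .spl file is just a gzipped tarball, and `splunk install app` essentially extracts it under $SPLUNK_HOME/etc/apps and registers the app with splunkd. A manual equivalent is often sketched like this; the credential package filename is an assumption:

```
tar -xzf splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
/opt/splunkforwarder/bin/splunk restart
```

The restart is what makes the forwarder pick up the new outputs and certificates shipped in the package.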
Hi all, I have a JSON file in this format:

{
  "NUM": "5",
  "EXECUTION_DATE": "04-07-2022",
  "STATUS": "FAILURE",
  "DURATION": "5 hrs, 13 mins",
  "PARTS": [
    { "NAME": "abc", "PART_NO": [ "2634702", "2634456", "2634890" ] },
    { "NAME": "xyz", "PART_NO": [ "2634702" ] }
  ]
}

I want to calculate the count of PART_NO values and plot it in a chart. The PART_NO values repeat, and I want to count the repeated values as well, so I used count. I used | timechart count(PARTS{}.PART_NO{}) but it gives the wrong count. Is there a different method to calculate the count?
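A hedged explanation: timechart count(X) counts events in which the field X exists, not the individual values inside a multivalue field, so repeated part numbers within one event are collapsed. Expanding the multivalue first counts every occurrence:

```
| spath path=PARTS{}.PART_NO{} output=part_no
| mvexpand part_no
| stats count BY part_no
```

Swap the final stats for `| timechart count AS part_count` if a time series is wanted rather than a per-part total.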
Hey, I have an inputlookup and I need to perform a stats values on one of the columns, "Migration Comments". I am able to use the stats functions on every column EXCEPT the one column I actually need to perform the function on. It seems it doesn't recognise the field name, even though I am copying the name of the field into the query. Here is the data table: (screenshot) And here is what the query I am trying to run looks like: (screenshot) What am I doing wrong? Many thanks
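A common cause, offered tentatively: field names containing spaces must be quoted inside stats functions, otherwise "Migration" and "Comments" are parsed as separate tokens. A sketch, in which the lookup file name and the by-field are placeholders:

```
| inputlookup migration_tracker.csv
| stats values("Migration Comments") AS migration_comments BY "Server Name"
```

Renaming the field first (`| rename "Migration Comments" AS migration_comments`) also works and avoids quoting it everywhere downstream.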
So, I'm looking for a way to synchronize Custom Lists to git the same way Playbooks and Custom Functions are synchronized. Is there a baked-in way to do it?
Hello Splunkers, a few days ago most of the serverclasses on our Deployment Server uninstalled an output app by themselves. As a result, splunkd was restarted on the UFs and data stopped being forwarded from the hosts. For context, each serverclass in our environment consists of a deployment app with inputs.conf, where we specify sources, and another deployment app called 'output_app' with outputs.conf, to get data forwarded to the indexer cluster. Example logs from one of the affected UFs:

06-29-2022 12:15:47.893 +0200 INFO DeployedServerclass - Serverclass=inputs_test_prod is uninstalling app=/opt/splunkforwarder/etc/apps/output_app
06-29-2022 12:15:47.893 +0200 INFO DeployedApplication - Removing app=output_app at='/opt/splunkforwarder/etc/apps/output_app'
06-29-2022 12:15:47.904 +0200 WARN DC:DeploymentClient - Restarting Splunkd...
06-29-2022 12:15:47.905 +0200 WARN Restarter - Splunkd is configured to run as a systemd service, skipping external restart process
06-29-2022 12:15:47.905 +0200 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_123.456.7.89_8089_z1il0123.xyz.ai_z1il0123.zyx.ai_95C4E8F1-731A-4280-9F09-93B03EAFB3DE
06-29-2022 12:15:48.206 +0200 INFO loader - Shutdown HTTPDispatchThread
06-29-2022 12:15:48.206 +0200 INFO ShutdownHandler - Shutting down splunkd
06-29-2022 12:15:48.206 +0200 INFO ShutdownHandler - shutting down level "ShutdownLevel_Begin"
06-29-2022 12:15:48.207 +0200 INFO ShutdownHandler - shutting down level "ShutdownLevel_FileIntegrityChecker"
06-29-2022 12:15:48.207 +0200 INFO ShutdownHandler - shutting down level "ShutdownLevel_JustBeforeKVStore"
06-29-2022 12:15:48.207 +0200 INFO ShutdownHandler - shutting down level "ShutdownLevel_KVStore"
06-29-2022 12:15:48.207 +0200 INFO ShutdownHandler - shutting down level "ShutdownLevel_DFM"
06-29-2022 12:15:48.207 +0200 INFO ShutdownHandler - shutting down level "ShutdownLevel_Thruput"
06-29-2022 12:15:48.207 +0200 INFO ShutdownHandler - shutting down level "ShutdownLevel_TcpInput1"
06-29-2022 12:15:48.207 +0200 INFO TcpInputProc - Running shutdown level 1. Closing listening ports.
06-29-2022 12:15:48.207 +0200 INFO TcpInputProc - Done setting shutdown in progress signal.

outputs.conf:

# Turn off indexing on the master
[indexAndForward]
index = false

[tcpout]
defaultGroup = splunk_prod
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:splunk_prod]
server=z1il0001.zyx.ai.zz:9997,z1il0002.zyx.ai.zz:9997, z1il0003.zyx.ai.zz:9997, z1il0004.zyx.ai.zz:9997, z1il0005.zyx.ai.zz:9997, z1il0006.zyx.ai.zz:9997
autoLB = true

Have you ever encountered such an issue? How is it possible that a serverclass gets rid of an app by itself? The last change we made was a Deployment Server upgrade from 8.2.3.3 to 9.0, but we did that on 24.06. Any idea what the root cause could be? Greetings, Dzasta
Hello, I have an on-prem indexer which I want to shut down, moving all of its contents to another indexer in Azure. What is the best practice for that? Thanks
Hi, I'm trying to give Splunk access to a user. I have a search which creates a lookup with host names; it is built from IPs in the _internal logs (I have a list of IP ranges). Now I want to create a role with restrictions limiting it to the hosts from the lookup. I've tried to create an event type, but I can't use pipes there to read the lookup. I've also tried to use the inputlookup command in the role's search restrictions, but no luck. Any idea how to do this? Maybe another way, without a lookup?
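For background: a role's search restriction (srchFilter in authorize.conf) accepts only plain search terms evaluated at search time; it cannot run pipes or inputlookup. A static sketch, where the role name and hosts are placeholders:

```
[role_restricted_viewer]
importRoles = user
srchFilter = host=web01 OR host=web02
```

One workaround sometimes used is a scheduled search that regenerates this filter by updating the role through the REST API whenever the lookup changes; whether that is acceptable depends on your change-control process.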
I am trying to expand a table row with BaseRowExpansionRenderer from this tutorial: https://dev.splunk.com/enterprise/docs/developapps/visualizedata/displaydataview/howtocreatecustomtablerow/ But whenever I expand one row, the previously opened one closes. Is it possible to expand multiple rows at the same time (currently only one row is expandable at a time)? Thank you very much!
I need help coming up with a query that can drive an IDPS/Internet Content Filtering dashboard in Splunk, to continuously monitor web traffic or pull reports on demand.
Please help me write a search showing the volume of logs, in GB/MB, ingested into Splunk per day and per month.
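A commonly used starting point is the license usage log; this sketch assumes the default _internal index is searchable by your role:

```
index=_internal source=*license_usage.log type=Usage
| eval GB=round(b/1024/1024/1024, 3)
| timechart span=1d sum(GB) AS GB_per_day
```

Change `span=1d` to `span=1mon` for the monthly view, or compute `eval MB=b/1024/1024` for megabytes.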
Dear community, do you know a way to monitor flows from my servers to AWS cloud instances, from Splunk Cloud version 8.2.2203.3? I am also looking for commands to help identify the root cause of a CPU usage increase. Thanks in advance for your efforts. Have a good day. Cordially,
Hello everybody, I have a question for the community: is there a reverse split command? I'll explain my problem. I have:

| eval Holidays = "01 / 01.01 / 06.08 / 15.11 / 01.12 / 08.12 / 25.12 / 26.05 / 01.04 / 25.06 / 02"

with the holidays that I want to remove from the day count (I create it; it can be a single value or a multivalue). Now I have to add the current year:

| eval year = strftime(now(), "%Y")

and have these days excluded from the final count:

| eval dates = mvrange(C3, now(), 86400)
| eval dates = mvfilter(NOT match(dates, "(Excluded)"))
| convert ctime(dates) timeformat="%A"
| eval dates = mvfilter(NOT match(dates, "(Saturday|Sunday)"))
| eval noOfDays = mvcount(dates)

I want to create an Excluded field that has the holidays with the current year, as epoch values, for example:

Excluded = "1640991600.000000|1641423600.000000|1660514400.000000|..."

Is it possible? Is there a reverse split command? Tks, Br
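The inverse of split is mvjoin. Combined with mvmap (available since Splunk 8.0) and strptime, the holiday list can be converted to epoch values and joined back into a single string. A sketch: the holiday entries below are assumed to be comma-separated day/month pairs, so adjust the split delimiter and strptime format to however your list is actually written:

```
| eval year=strftime(now(), "%Y")
| eval holidays=split("01/01,06/01,15/08,01/11,08/12,25/12", ",")
| eval epochs=mvmap(holidays, tostring(strptime(holidays."/".year, "%d/%m/%Y")))
| eval Excluded=mvjoin(epochs, "|")
```

mvmap applies the expression to each element of the multivalue field, so every holiday gets the current year appended before being parsed into an epoch.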
Hello community, after a small "snafu" with new dashboards and version numbers, I noticed that after the rollout in our distributed environment there was what seemed like a local backup present:

/opt/splunk/etc/apps/<appname>/default.old.20220705-235555/

The date lines up with the rollout in which dashboards started receiving a "This dashboard view is deprecated and will be removed in future versions of Splunk software" error, hence I suspect these are connected in some way. The dashboards were "repaired" by just dropping the version number by 1, though the "backup" files are still there. The only differences I notice are the install_source_checksum and the changes made to the dashboards. So, is it OK to just delete this "backup" folder? If so, is there a preferred way to do so, or should I just remove it?
Hi all, recently I upgraded Splunk to the latest version (9.0.0) on the DS, HF, and AIO machines I have. Everything was working just fine before the upgrade; after upgrading the whole set of machines things went wrong. The main problem is on the HF: in "Health Status of Splunkd" the "File Monitor Input" indicator is red for almost all inputs, as shown in the screenshot below, along with the following messages. I have also noticed that in "Monitoring Console" -> "Indexing" -> "Performance" -> "Indexing Performance: Instance", the queue fill ratio is 100% on all the pipelines, as shown in the screenshot below. The server itself is not heavily utilized; it has the following specs: OS: Red Hat Enterprise Linux Server release 7.9 (Maipo), CPU: 8 cores, RAM: 64 GB. Can anybody point me to the cause of this problem? Many thanks, Murad Ghazzawi.
Hi, I want to link to an episode and also a specific review dashboard. I created a dashboard without any filters set and obtained a link including both emid and episodeid (and even the tabid I want). But when the link is used, it goes to the correct episode but ditches the dashboard. If I omit the episodeid in the link, it shows the correct dashboard. Please advise.