All Topics



I have this search:

index=xxx sourcetype="yyy" earliest=01/27/2020:08:00:00 latest=01/27/2020:18:00:00
| timechart p99(ResponseTime) as 99p
| sort -99p
| head 1
| addinfo
| eval earliest1=strftime(relative_time(info_min_time,"+1d"), "%Y-%m-%d %H:%M:%S")
| eval latest1=strftime(relative_time(info_max_time,"+1d"), "%Y-%m-%d %H:%M:%S")
| table _time 99p
| append
    [ search index=xxx sourcetype="yyy" earliest=$earliest1$ latest=$latest1$
      | timechart p99(ResponseTime) as 99p
      | sort -99p
      | head 1
      | table _time 99p ]

How can I pass the "earliest1" and "latest1" values from the main search to the appended subsearch? Is that possible?
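In plain SPL (outside a dashboard), $earliest1$/$latest1$ tokens are not substituted; values can only flow out of a subsearch into its parent, not sideways between pipeline stages. One possible workaround, as an untested sketch: recompute the shifted window in an inner subsearch and hand it to the appended search with "return":

```
| append
    [ search index=xxx sourcetype="yyy"
        [ search index=xxx sourcetype="yyy" earliest=01/27/2020:08:00:00 latest=01/27/2020:18:00:00
          | head 1
          | addinfo
          | eval earliest=relative_time(info_min_time,"+1d"), latest=relative_time(info_max_time,"+1d")
          | return earliest latest ]
      | timechart p99(ResponseTime) as 99p
      | sort -99p
      | head 1
      | table _time 99p ]
```

"return earliest latest" emits earliest=<epoch> latest=<epoch> into the outer search string, and the time range parser accepts epoch values for those keywords.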
I hope I explain this well. I have the following tstats search:

| tstats max(_time) AS _time WHERE index=_internal sourcetype=splunkd source=*metrics.log by host

I also have a lookup table with hostnames in a field called host, set up with a lookup definition whose match type is WILDCARD(host). In the data returned by tstats, some of the hostnames have an FQDN and some do not. The problem becomes the order of operations. Say I do this:

| tstats max(_time) AS _time WHERE index=_internal sourcetype=splunkd source=*metrics.log by host
| lookup serverswithsplunkufjan2020 host OUTPUT host as match
| eval "Sending Data?" = if(isnotnull(match), "Yes", "No")

Then it gives me more hosts than I'm looking for, although it does show where there is a match. When I search in this manner:

| inputlookup serverswithsplunkufjan2020.csv
| join type=left host
    [| tstats max(_time) AS _time WHERE index=_internal sourcetype=splunkd source=*metrics.log by host
     | eval seen=1]
| eval "Sending Data?" = case(seen=1, "Yes", isnull(seen), "No")
| fields - seen

it uses a join, and who wants that? I can't seem to use lookup, which I need for the wildcards. I can use inputlookup fine, but I can't seem to make wildcard matching work there. I could do something like host IN ("foohost1*", "foohost2*") to search for what I need, but I'd like to build something dynamic.
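One wildcard-friendly alternative to try (an untested sketch, assuming the lookup's field is named host): let a subsearch expand the lookup into the tstats WHERE clause, so the wildcard patterns become indexed-field filters:

```
| tstats max(_time) AS _time
    WHERE index=_internal sourcetype=splunkd source=*metrics.log
        [| inputlookup serverswithsplunkufjan2020.csv
         | fields host
         | format ]
    by host
```

format turns the lookup rows into ( host="foohost1*" ) OR ( host="foohost2*" ) ..., which tstats evaluates against the indexed host field, wildcards included.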
Good morning, I'm curious whether anyone is willing to share their experience in building a successful business case for Splunk for IT Ops. Were there any areas where decision makers really saw the value, or that resonated with them? What did you focus on? If you have already implemented the system, did you discover use cases other than the ones mentioned here: https://www.splunk.com/en_us/it-operations.html ? I see there is a great webinar focused on Security: https://www.splunk.com/en_us/form/learn-how-to-build-the-splunk-business-case-for-security/thanks.html
Hi all, I don't understand what the minimum number of events generated in the itsi_summary index by a single KPI is. Can someone help me?
Hello, I have the following case: I created a dashboard, an app for it, and a role allowing only "read" access to my dashboard. Users with this role would use my dashboard, but I would not like them to see the searches behind the panels. These are quite complicated SQL statements and I would like to keep them hidden from the end users. Is there any way to forbid access to the panel searches while still allowing users to work with the dashboard and see the results? My second question: I want this particular role to allow access only to this one specific app with its specific dashboards. However, I noticed that many other apps which I installed, admittedly for playground reasons, are shared globally and have access to a lot of data. Does that mean I have to go one by one through these apps and revert the permissions from Everyone back to the particular roles? Kind regards, Kamil
I'm trying to figure out the sizing of a Splunk environment that will only be used for a very short time but by a substantial number of users (20-40 workshop participants). All these users will be running searches simultaneously against the same index. The idea of the workshop is to build a dashboard to visualize some previously indexed data; in a way it's very similar to the official Splunk4Rookies workshop, just with different data. My concern is that the user experience will be terrible due to too many searches being attempted at the same time. This raises the following questions: - How many concurrent searches need to be possible to support 30 users simultaneously building dashboards? - How far can I increase the max_searches_per_cpu parameter in limits.conf, and what are the downsides? - Assuming all data resides in one index (and all searches run against this index), is that enough, or should the index be replicated by implementing indexer clustering? How many searchable copies would be necessary? I'm hoping to be able to use an all-in-one Splunk instance (so no indexer clustering), but I have no means to realistically test the search performance/experience with 20-40 simultaneous users before the actual workshop. Does anyone have experience with such a setup, or does anyone know how Splunk does this for their Splunk4Rookies workshops? Thanks
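For rough planning: historical search concurrency is approximately base_max_searches + max_searches_per_cpu x number_of_cores, so a 16-core all-in-one box at defaults allows around 22 concurrent historical searches. A sketch of what raising it might look like (the values are illustrative, not a recommendation; setting them too high trades search queueing for CPU/memory contention and slower individual searches):

```
# $SPLUNK_HOME/etc/system/local/limits.conf
[search]
# default is 6
base_max_searches = 10
# default is 1; each CPU core adds this many concurrent historical searches
max_searches_per_cpu = 2
```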
Folks, I would like some help from you. Here in the company where I work, Splunk has no access to the internet. After a lot of conversation, I managed to convince the client to allow the tool to access the internet; however, access is only partially working. Today I can install a new app through Splunk Web, but I can't update an app that is already installed. The firewall team asked me for the Splunk domains to whitelist. Below is the list I gave them; I would like to know if there is any other domain I should request:
- url = https://splunkbase.splunk.com/api/apps
- loginUrl = https://splunkbase.splunk.com/api/account:login/
- detailsUrl = https://splunkbase.splunk.com/apps/id
- updateHost = https://splunkbase.splunk.com
- updatePath = /api/apps:resolve/checkforupgrade
- https://telefonica.threatconnect.com/api
Hello, working with Splunk 7.3.2. I have two multivalue fields that have a set of values in common:

| makeresults
| eval A="a,b,c,d,e,f,g,h,i,j", B="d,h,j,l,o,t,z"
| table A B
| makemv A delim=","
| makemv B delim=","

In this case the common values are d, h, j. What I'd like to do is create a new multivalue field containing those values. The following search gets the job done, but it seems like a terrible way of doing so:

| makeresults
| eval A="a,b,c,d,e,f,g,h,i,j", B="d,h,j,l,o,t,z"
| table A B
| makemv A delim=","
| makemv B delim=","
| eval C = mvappend(A,B)
| table C
| mvexpand C
| eventstats count by C
| where count > 1
| dedup C
| stats values(C) as C

Can somebody give me some pointers/suggestions on how to make it more elegant and less resource-intensive? Thanks! Andrew
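If upgrading is an option: Splunk 8.0 added mvmap, which turns the intersection into a single eval instead of an mvexpand/eventstats pass. This won't run on 7.3.2; an untested sketch:

```
| makeresults
| eval A="a,b,c,d,e,f,g,h,i,j", B="d,h,j,l,o,t,z"
| makemv A delim=","
| makemv B delim=","
| eval C=mvmap(A, if(isnotnull(mvfind(B, "^".A."$")), A, null()))
| table A B C
```

mvmap walks each value of A, and mvfind returns a non-null index when that value appears in B, so C keeps only the common values without expanding rows.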
Hi all, how can I merge two row values into one field? I am trying with case but I am not getting the output.
Hey, I have a question regarding timeouts and return codes when Splunk is shutting down a cluster peer on a Linux system. I ran a script that issues "splunk offline", waits for the command to return, and then starts the next action unless the previous command comes back with a non-zero return code. If that happens, the script stops and asks for the user's input to either abort, retry, skip, or continue. We encountered a situation where the offlining ran into a timeout and the command returned while Splunk was still in the process of terminating. However, the script started the next command (which then stopped the flow when it detected an inconsistency), indicating that we received an ERR_NOERR return code from Splunk. Is that expected Splunk behaviour? Short info about the environment: Splunk 6.6.5 (build b119a2a8b0ad), multisite indexer cluster with 16 peers. Thanks in advance!
Hello all, I have a query in my dashboard:

Routing_Location="$Routing_Location$"
| fillnull
| stats count(_raw) AS Attempts by ANI, Routing_Location
| sort -Attempts

The issue is that when someone puts, for example, "USA,Cellular_Verizon" in the text field, no results come up, but when I put in "USA Cellular_Verizon" the search does work. I need a way to replace the comma with a space before the search takes place (probably on the XML side). I have already tried

| rex field=Routing_Location mode=sed "s/(\w+)([^\w]+)(\w+)([^\w]+)(\w+)/\1 \3 \5/"

but that has no effect. Thanks in advance, I hope someone can help!
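One way to do this on the XML side (an untested sketch; the input label and the raw-token name here are made up): capture the typed value into an intermediate token and derive the real search token with an eval token in a change handler:

```
<input type="text" token="Routing_Location_raw">
  <label>Routing Location</label>
  <change>
    <!-- replace commas with spaces before the search runs -->
    <eval token="Routing_Location">replace($value$, ",", " ")</eval>
  </change>
</input>
```

The panel search then keeps using $Routing_Location$ unchanged, and "USA,Cellular_Verizon" arrives as "USA Cellular_Verizon".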
Hello, I need help with counting search results. I cannot use the following:

| stats count as Total

because the stats command destroys the output I got from the database. A simple example:

| noop search_optimization=false
| dbxquery query="select now() from dummy" connection="HANA_MLBSO"
| stats count as Total

Here I only get a Total under the Statistics tab as a result, instead of the date coming from now(). My goal is to trigger a custom command at the end of the search, but only if the search returned any results, be it events or dbxquery output. Something like this:

index=mlbso sourcetype=isp_hanatraces secondary
| stats count as Total
| where Total > 0
| eval SEVERITY = 3
| eval AlertSource = "SPLUNK"
| eval AlertText = "Test"
| eval ShortText = "HANA crash"
| eval SID = "SID"
| eval DB = "DBSID"
| eval host = host
| eval _time = "TIME"
| mycommand

For the example above it might even work, as there are events returned, but not for the DB output of dbxquery. Is there any elegant way to check that the result set is not empty without deploying stats? Kind regards, Kamil
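One option that keeps the rows intact: eventstats attaches the count to every result instead of collapsing them, so you can gate on it without losing the dbxquery output. An untested sketch on the simple example:

```
| noop search_optimization=false
| dbxquery query="select now() from dummy" connection="HANA_MLBSO"
| eventstats count as Total
| where Total > 0
| fields - Total
| mycommand
```

With zero results there is nothing for mycommand to consume anyway; if mycommand is still invoked on an empty result set, that invocation behaviour would need to be handled inside the command itself.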
Hello, I downloaded the funnel visualization app to my laptop from https://splunkbase.splunk.com/app/3413/ and followed the official Splunk guide to install it via "Install App from file" in my local Splunk Enterprise environment. When I browse to the tgz file and click upload, I get this error: "Error connecting to /services/apps/local: The read operation timed out". Please refer to the screenshot for the error. How can I resolve this?
So I have a string of IPs as input and am trying to figure out how to add the location for them, which is stated in a CSV. The input string varies and could look, for example, like this for each host:

ip=1.1.1.1
ip=1.1.1.2|1.2.3.4
ip=1.1.1.5|1.4.4.6|1.2.4.6

meaning each host could have one IP or more. Some of these IPs are in the location CSV, some are not. So my table from the beginning has these values, plus other empty fields that will be filled later: hostname, ip, location, owner. The ones with information at the moment are hostname and ip. I'm trying to add location with the search below, then add the other info after this code as it depends on it:

| inputlookup hostnames.csv
| table hostname ip
| eval ip = split(ip,"|")
| eval numIPs = mvcount(ip)
| eval iVal = mvrange(0,numIPs,1)
...missing code...
| lookup location_info ip_prefix AS ip OUTPUT location
| table hostname ip location owner
| eval location = if(location="NONE" OR location="Unknown", "Unknown", location)
| streamstats count
| mvexpand location
| dedup count location
| mvcombine location
| fields - count
| lookup owners.csv location OUTPUT owner
| table hostname ip location owner

When there is one IP in the string this works, but if there are more I have no clue at all how to solve it. I've tried mvexpand, mvcombine, and foreach with no luck, or I'm using them wrongly. Can someone share some insight on this?
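A sketch of one way to handle the multi-IP case (untested, reusing the lookup names from the post): expand the multivalue ip before the location lookup so each IP is matched on its own row, then collapse back per host:

```
| inputlookup hostnames.csv
| table hostname ip
| eval ip = split(ip,"|")
| mvexpand ip
| lookup location_info ip_prefix AS ip OUTPUT location
| eval location = coalesce(location, "Unknown")
| stats values(ip) as ip, values(location) as location by hostname
| lookup owners.csv location OUTPUT owner
| table hostname ip location owner
```

After mvexpand every row carries exactly one IP, so the matching in location_info behaves the same as in the single-IP case; stats rebuilds the multivalue fields afterwards. The final owners.csv lookup may itself need similar care when location ends up multivalued.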
Hi, I am getting a warning after running any search job: "Eventtype 'wineventlog_security' does not exist or is disabled." There is a post regarding this (https://answers.splunk.com/answers/744214/eventtype-wineventlog-security-does-not-exist-or-i.html) which mentions checking that this eventtype is shared globally, and mine are globally shared. Would anyone know where else I should check? I am on version 8.0.0. Thanks and regards
I have deployed the Phantom OVA and set up the IP and server names according to my environment. I go to https://[myphantomserver] and get:

500: Server Error
Sorry, something went wrong with your request the URL: https://[myphantomserver]/eula

It seems it cannot display the license page? I am using a community license edition. Can anyone shed some light? I have set the host names in the '/etc/hosts' and '/etc/hostname' files. Many thanks in advance
Hi all. I am trying to create a search where I can look at users' max duration between logins, for users who registered with us between two fixed dates, i.e. Jan 2017 - Feb 2017. I have the following, which is interesting but doesn't give the max gap length:

| dedup eventId
| stats count(_time) as appear_count, values(_time) as appear_dates, max(_time) as last, min(_time) as first by customerNumber
| eval first_appear=strftime(first,"%d/%m/%Y")
| eval last_appear=strftime(last,"%d/%m/%Y")
| eval appear_dates=strftime(appear_dates,"%d/%m/%Y")
| eval duration=round((last-first)/86400)
| where first<01/02/2019

For example, I have a user that has used the service 400 times with a max break of about a week. So I need the search to pick up users whose first appearance was in Jan-Feb 2017, and then I need to know that this user has had at most a week's break between accesses. Does this make sense? It's almost as if I need to write the search to collect all users where first < 28/02/2017, and then evaluate each event in order and subtract the later from the earlier. So for someone who accessed the service 5 times it would be:

USER ONE first=22/02/2017
event 1 22/02/2017
event 2 25/02/2017 (diff between events 2-1 = 3 days)
event 3 01/03/2017 (diff between events 3-2 = 4 days)
event 4 09/03/2017 (diff between events 4-3 = 8 days)
LAST event 5 10/03/2017 (diff between events 5-4 = 1 day)

Therefore the max duration between events = 8 days.
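The per-user gap logic described above can be done with streamstats, which can carry the previous event's _time forward within each customer. An untested sketch (the cutoff date here is illustrative):

```
| dedup eventId
| sort 0 customerNumber _time
| streamstats current=f last(_time) as prev_time by customerNumber
| eval gap_days = round((_time - prev_time)/86400)
| stats min(_time) as first, max(gap_days) as max_gap_days by customerNumber
| eval first_appear = strftime(first, "%d/%m/%Y")
| where first < strptime("01/03/2017", "%d/%m/%Y")
```

With events sorted ascending per customer, prev_time on each row is the previous access, so max_gap_days is exactly the "8 days" in the USER ONE example. Note the where clause must compare epoch values; a bare 01/02/2019 would be evaluated as arithmetic division, not a date.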
Hi, we are trying to implement business transaction monitoring using AppDynamics for an application developed on Pega. When we look at the Business Transaction Snapshots in the AppD tool, it captures all the URLs with either PRServlet or PRSOAPServlet, but from that we are not able to distinguish the different business transactions. Has anyone implemented this? Is there any way to distinguish the different business transactions?
I am trying to output CSVs by executing a lot of queries using the report function on Splunk Cloud. At the same time, we are also using the alert function for operations monitoring, so we are studying how to respond if queries back up. Is it possible to assign priorities and suspend or delete low-priority query processing? Thanks in advance.
DB Connect retrieves data from multiple tables. The acquisition times are staggered considering the load on the DB. As for the acquisition method, I want to pull only the updated rows by looking at the ID, so I set the rising input type and the following SQL is executed:

SELECT * FROM "zaif"."Public"."Table name" WHERE id > ? ORDER BY id ASC

However, the development side requested that even if the import processes are staggered, the cutoff of the data imported each day should be the same. For example, even if the import processes start sequentially at 5:10, 5:20, and 5:30, I want all of them to import data only up to 5:00. Then on the next day, we want to get the data from 5:00 of the previous day onward (so that there are no missed updates). Please tell us how to configure the import in that case.
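Assuming the table has a timestamp column recording when each row was written (the name created_at below is hypothetical), one sketch is to add a fixed 5:00 cutoff to the rising-column query, so every staggered run on a given day sees the same upper bound, and the next day's run picks up everything after the previous cutoff via the rising id:

```
SELECT *
FROM "zaif"."Public"."Table name"
WHERE id > ?
  AND created_at < CURRENT_DATE + INTERVAL '5 hour'
ORDER BY id ASC
```

The interval syntax here is PostgreSQL-style and would need adjusting for other databases; without any timestamp column on the table, a same-cutoff guarantee cannot be expressed in the query alone.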