I've been trying to create a query of the following type:

… some base search... | timechart span=1d count

with the trending value comparing the count right now with the count 24 hours ago. Unfortunately this is not working, as the trend happens to compare a count for today's date (a partial day) with the count for the whole day yesterday. I read answers to similar questions:

1) https://answers.splunk.com/answers/333319/how-to-create-a-search-to-show-a-trending-single-v.html
2) https://answers.splunk.com/answers/86659/timechart-day-offset.html

which led me to believe that I need to offset the time to get this working. So my current query looks like:

… base search | timechart span=1h count | addinfo | eval hour_of_time = strftime("%H",info_search_time) | eval _time = _time - (hour_of_time * 3600) | timechart span=1d sum(count) as count

to which I believe I need to add an eventual _time = _time + (hour_of_time * 3600). Since the hour_of_time field is gone from the result of the query above, I tried appending the following to the query again:

| addinfo | eval hour_of_time = strftime("%H",info_search_time) | eval _time = _time + (hour_of_time * 3600)

However, the results:
- include a _time column with no values in it
- do not include the hour_of_time field

What am I missing?
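The hour-shift arithmetic in the query above can be sketched outside SPL. This is a minimal Python illustration of the same idea (UTC is used here for reproducibility; Splunk's strftime would apply the search-time zone):

```python
import time

def hours_elapsed_today(epoch):
    # Equivalent of strftime("%H", info_search_time) in the SPL above.
    return int(time.strftime("%H", time.gmtime(epoch)))

def align_to_day_start(epoch):
    # Mirrors: eval _time = _time - (hour_of_time * 3600)
    # Shifting events back by the hours elapsed today lets a partial
    # day's bucket line up against the previous full day's bucket.
    return epoch - hours_elapsed_today(epoch) * 3600
```

Only whole hours are subtracted, matching the SPL, because `timechart span=1h` has already bucketed the events to the hour.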
Is it possible to configure the expiry time for a scheduled report? I've checked the documentation and savedsearches.conf, and there don't seem to be any options for that.
Setting tokens programmatically in Splunk via JS is pretty easy, as documented at: https://dev.splunk.com/enterprise/docs/developapps/webframework/binddatausingtokens/getandsettokenvalues/ However, the code snippet at that link appears to have no effect on inputs with selectFirstChoice set to true. I have developed the following workaround...

require([
    "underscore",
    "jquery",
    "splunkjs/mvc",
    "splunkjs/mvc/simplexml/ready!"
], function (_, $, mvc) {
    var tokens = mvc.Components.get("default");
    var tokenName = "myTokenName";
    var tokenValue = "myTokenValue";
    tokens._events[`change:${tokenName}`][0].context.attributes.selectFirstChoice = false;
    tokens.set(tokenName, tokenValue);
});

Evidently, however, this is a hack, as it reaches into the _events attribute, which is not part of the public API. Thus, I would like a more elegant solution. Any ideas? Cheers
Hi, I am attempting to get some analytics from the Netscaler into Splunk via an Independent Forwarder, using AppFlow policies on the Netscaler.

I followed this document to install and configure the Independent Forwarder: https://docs.splunk.com/Documentation/StreamApp/7.2.0/DeployStreamApp/InstallStreamForwarderonindependentmachine

I then followed this one to set up the above Independent Forwarder so it could receive the IPFIX data from the Netscaler AppFlow policy: https://docs.splunk.com/Documentation/StreamApp/7.2.0/DeployStreamApp/UseStreamtoingestNetflowandIPFIXdata

When I applied the Netscaler AppFlow policy to a virtual server, data was not coming through. I ran tail -f on streamfwd.log, and it indicated that it did not have the required templates to decode the netflow. I amended the template refresh interval on the Netscaler to 60 seconds and, sure enough, not long after that, the data was making its way into the specified index.

When I search the index the data is going to (index="netscaler"), it seems the Netflow elements are not being decoded. I have basic information such as source IP and destination IP, but all other data, I suspect, is locked away under the netflow_elements: field, which contains no human-readable data.

This document, https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/ConfigureIPFIXinputs, says to set the source type to citrix:netscaler:ipfix, and I did in the httpinput inputs.conf, but this appears to have no effect, as the source on the aforementioned events is simply stream:netflow.

Any assistance would be greatly appreciated. Regards, David
Hi, I am trying to show the difference in percentage between two values in my log, Tx_Mbps and Input_Rate_Mbps, and display the percentage as a Single Value. When I used the query below in Search & Reporting, I noticed that there is a difference between verbose mode (I get 44%) and fast mode (I get 68%). Then, when I applied this query in my dashboard, I always get 68%, so I assume it is not in verbose mode. Is there any way to force my dashboard to search in verbose mode? These are the things I tried:

- Used eventstats instead and added "| fields * |". I tried this in all positions throughout the query. I still get non-verbose results.
- Saw a suggested solution to add this to my dashboard SimpleXML: <param name="searchModeLevel">verbose</param>. Can you advise where I should add this? I tried adding it to the tag within my panel. I still get non-verbose results.

This is my query in the dashboard:

sourcetype=csv index=*portstats* OR index=*q_health*
| eval Tx_Mbps=max(Tx_Mbps,0)
| eval Input_Rate_Mbps=max(Input_Rate_Mbps,0)
| stats avg(Tx_Mbps) as h avg(Input_Rate_Mbps) as q
| eval diff = ((h-q)/q)*100
| table diff

Thanks in advance!
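For reference, the final eval above is a plain relative difference. A quick Python equivalent of the same arithmetic (the zero-baseline guard is my addition, not in the original query):

```python
def percent_diff(h, q):
    # Same arithmetic as: eval diff = ((h-q)/q)*100
    if q == 0:
        raise ValueError("baseline q must be non-zero")
    return (h - q) / q * 100
```

Because the averages feeding h and q differ between fast and verbose mode, the same formula yields different percentages; the formula itself is mode-independent.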
Hi team, I am getting the following error from my Splunk health check: "The number of extremely lagged searches (1) over the last hour exceeded the red threshold (1) on this Splunk instance". Do you have any idea what I should do?
I have this log:

<LST>
   <S>Watch</S>
   <S>Move</S>
   <S>Delete</S>
   <S>Flip</S>
</LST>

And I want to extract this part with rex syntax:

<S>Watch</S>
<S>Move</S>
<S>Delete</S>
<S>Flip</S>

But I am not having success. I think it is because of the special characters. Thank you in advance
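As a sanity check on the pattern itself, outside of SPL, here is a minimal Python sketch extracting the same <S>…</S> values. Angle brackets are not regex metacharacters, so they need no escaping:

```python
import re

log = """<LST>
   <S>Watch</S>
   <S>Move</S>
   <S>Delete</S>
   <S>Flip</S>
</LST>"""

# Capture each value between <S> and </S>; the non-greedy .*? keeps
# each match inside a single tag pair.
values = re.findall(r"<S>(.*?)</S>", log)
```

In SPL, rex matches only the first occurrence by default; its max_match option is what allows repeated captures of the same group. This snippet only demonstrates that the pattern itself matches.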
Is the installation file the same for setting up a Splunk search head, indexer, and deployment server?
There are lots of posts here that are similar to what I'm working on, but none of them gave completely applicable answers to our situation. Last year I created a couple of reports for Splunk license usage on our DMC, which I then embedded in a dashboard on our SHC for management. It worked fine on our 7.3 SHC, and I never had to do any of the embedSecret or allowEmbedTokenAuth changes I see other people referring to. In the first quarter of this year we built a new 8.0.1 SHC on beefier Linux hardware, and we discovered yesterday that those (and other) embedded reports do not display. The embedded links all still work if you use them on their own in a browser, and I've even tried disabling and then re-enabling the embedding of the source report, then using the fresh iframe code over on my SHC dashboard. I don't see anything relevant in the release notes (although I've been at this about 10 hours, so maybe I'm not seeing straight anymore). Can anyone point out whatever stupid mistake I've made?

<dashboard>
  <label>Embedded Test</label>
  <description>North America Data Centers</description>
  <row>
    <panel>
      <title>License Usage Previous 30 Days</title>
      <html>
        <iframe height="336" style="width:100%" frameborder="0" src="http://my.dmc.com/en-US/embed?s=%2FservicesNS%2Flycollicott%2Fsplunk_monitoring_console%2Fsaved%2Fsearches%2FLicense%2520Usage%2520-%2520Previous%252030%2520Days%2520%2528embed%2529&amp;oid=klRHC2gemHbP9zFz82hVKNUo_x1hXx9tky9z%5EzfAjQAaNkHcWrTPogHmC6ygj6kM8oRcsktQSvYow32fl_lQam%5ERVQVUbMfnEoD_KpZiwFePhqqJkgyDa5X1VmXcHKav47M7jCJJKlbn3%5EYr2Ng_VCZFk0mXFJueCiw50oYpuoyNB2vFAxOj_RvMcMz0hjhUddsmHd31UmWO7j42CMq7WxHOYeY"/>
      </html>
    </panel>
  </row>
</dashboard>
I have a cluster of Windows indexers. I need to back up my new warm buckets every day and cannot afford to wait until buckets roll to frozen to back them up. I was thinking of creating a script that would copy out my warm buckets right from my Splunk storage and send them to my cold storage system. However, I was warned that this could interfere with Splunk trying to read from those buckets while I am copying them.

My second thought was to use VSS shadow copies. I can have a script that creates a VSS shadow copy of my storage drive on my indexers and then copies the warm buckets from the shadow copy instead of from the original data. But my understanding of VSS is that it will also prevent Splunk from reading any data while the shadow copy is being made. I have seen documentation referencing the use of VSS to back up warm buckets from Splunk, but nothing that goes into detail on how it works or how it should be implemented.

Here are my questions:
- Is Splunk VSS aware?
- Will VSS interfere with Splunk reading data from the warm buckets?
- Am I right in thinking that copying buckets right from Splunk storage, with robocopy for example, will interfere with Splunk being able to read data?
- Is there any other recommended way to back up Splunk storage without waiting for the buckets to roll to frozen?
I have this query to list the apps, their versions, and their last update date on all indexer nodes; however, the updated date lists a default of "1969-12-31T19:00:00-05:00" for all apps. Any way to modify this to produce the proper updated date?

| rest /services/apps/local
| search disabled=*
| table splunk_server, title, label, version, updated, disabled, visible, description, author, configured, core, "eai:acl.app", "eai:acl.sharing", id

thanks in advance for any assistance... Rich
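Worth noting for anyone debugging this: 1969-12-31T19:00:00-05:00 is simply Unix epoch 0 rendered in a UTC-5 offset, which suggests the REST endpoint is returning no real timestamp for those apps rather than a wrong one. A quick Python check of that rendering:

```python
from datetime import datetime, timezone, timedelta

# Render Unix epoch 0 in UTC-5, matching the "default" value in the question.
est = timezone(timedelta(hours=-5))
rendered = datetime.fromtimestamp(0, est).isoformat()
```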
Splunk Enterprise 8.0.2. I can send an email through our enterprise relay using python3 smtplib and email.message. These come through with Content-Transfer-Encoding set to 7bit. When I set up and trigger an email alert action through Splunk, it fails to relay through, and the Content-Transfer-Encoding is set to base64. That is the only difference I can detect between the two emails using Wireshark. Is there a way to change the Splunk alert email Content-Transfer-Encoding to 7bit? I have looked at sendemail.py and sendemail_handler.py and cannot see where this is specified; it may be in another conf file, or perhaps it needs to be explicitly defined in one of those two .py files? Thank you for any help. RASmith
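For comparison, this is roughly how the working python3 side of the question can pin the encoding: email.message.EmailMessage lets you request 7bit explicitly via the cte argument of set_content (the addresses below are placeholders, and this illustrates the stdlib behavior, not a fix inside sendemail.py):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "splunk@example.com"   # placeholder addresses
msg["To"] = "ops@example.com"
msg["Subject"] = "Test alert"
# cte="7bit" requests a 7bit Content-Transfer-Encoding; the body must
# then be ASCII-only, otherwise set_content raises an error.
msg.set_content("Alert body, plain ASCII only.", cte="7bit")
```

The resulting message can then be handed to smtplib's send_message. Splunk's base64 choice would have to be changed wherever its alert-email code builds the MIME body, which is what the question is trying to locate.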
Hi All, I am trying to use Redshift to store all my Splunk logs. Is it possible?
Hi, on a standalone SH, we are pulling OKTA logs using the OKTA Identity Cloud app. I need to filter events based on the email address; for example, anything with *gmail.com should not be indexed. I put props.conf and transforms.conf in this location: C:\Program Files\Splunk\etc\apps\TA-Okta_Identity_Cloud_for_Splunk\local

props.conf:

[OktaIM2:log]
TRANSFORMS-set = setnull

transforms.conf:

[setnull]
REGEX = gmail.com
DEST_KEY = queue
FORMAT = nullQueue

But the events are still not getting filtered. Any suggestions?
I have this search/report:

host=app-dev-001 terminating OR rehire
| convert timeformat="%Y-%m-%d" ctime(_time) AS date
| table date rehire term_user

This gives me this result: I would like to get term_user values to start showing up on row 1. Is there something like Python's zip_longest function?

import itertools
for u1, u2 in itertools.zip_longest(l1, l2):
    print(u1, u2)

omikusarl ahubshs
chasinnb egathnls
yeanvked mfdhaaar
kkldjuga iuvdcahe
aarehdv swusrbib
vikdho3n rcathrki
None jduakdf
None loidjht
Hi, I'm trying to filter the results of a lookup depending on the time selection from the dashboard. I have a date field in the lookup. Below is a sample of the lookup:

ReportedAt            Id    status
2020-04-09 5:00:00    567   Pass

I'm trying the logic below, but it is not working:

| inputlookup file.csv
| eval timeep = strptime('ReportedAt', "%Y-%m-%d %H:%S")
| addinfo
| where timeep > info_max_time and timeep < info_min_time

and:

| inputlookup file.csv
| where Reportedat > $t1.earliest$ and Reportedat < $t1.latest$

Can you please let me know if there is a way to display the results depending on the time selection in the dashboard.
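One thing worth checking in the first attempt is the strptime format: a value like 2020-04-09 5:00:00 carries minutes, so the matching pattern is %Y-%m-%d %H:%M:%S rather than %H:%S. A Python sketch of the same parse (UTC is assumed here for reproducibility; Splunk's strptime uses the search-time zone):

```python
from datetime import datetime, timezone

def to_epoch(reported_at):
    # Parse "2020-04-09 5:00:00"-style values; note %H:%M:%S.
    dt = datetime.strptime(reported_at, "%Y-%m-%d %H:%M:%S")
    return dt.replace(tzinfo=timezone.utc).timestamp()
```

The bounds in the first where clause also look inverted: an event inside the selected range satisfies timeep >= info_min_time AND timeep <= info_max_time, not the other way around.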
I have a script that writes data that looks like this to a log file. I have this search:

host=sfo-app-dev-001 terminating OR new_hire OR rehire OR "changes supervisor"

and I get these results:

"2020-04-08 17:34:53,589:INFO: User id 135062 (hgevpsar) changes supervisor from klaurns/id=14654 to fakesuper/id=42"
"2020-04-08 17:34:53,574:INFO: User id 854526 (loovkosg) changes supervisor from eisetpl/id=446070 to fakesuper/id=42"
"2020-04-08 17:34:52,892:INFO: rehire pabisanh."
"2020-04-08 17:34:52,891:INFO: rehire dadhre."
"2020-04-08 17:34:52,214:INFO: new_hire grdorimg."
"2020-04-08 17:34:52,214:INFO: new_hire bokdtaua."
"2020-04-08 17:34:51,514:INFO: terminating hluhsha"
"2020-04-08 17:34:51,496:INFO: terminating auamjmo"

I would like to generate a report that puts all the terminated users, new hires, re-hires, and supervisor changes into columns, like this:

Terminations | New Hires | Re-hires | Super Changes
hluhsha      | grdorimg  | pabisanh | (hgevpsar) changes supervisor from klaurns/id=14654 to fakesuper/id=42
auamjmo      | bokdtaua  | wjtorkuo | (forecscf) changes supervisor from bucreah/id=62931 to fakesuper/id=42
arkgmu2i     | tsoh      | -        | (kaprsaer) changes supervisor from cstiobs/id=127168 to fakesuper/id=42
ivargda      | lkrnluei  |          | (nfntecoo) changes supervisor from arhreinn/id=561422 to fakesuper/id=42
             | ontaguh   |          |
             | oaomkha   |          |

I have tried this search:

host=sfo-app-dev-001 terminating OR new_hire OR rehire OR "changes supervisor"
| table term_users newhires rehires super_changes

But I really do not understand how to create custom fields. I have tried to use the "Extract New Fields" wizard but cannot seem to get it to do what I need.
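In SPL, columns like those would come from rex extractions on each event type; the field names in the attempted table (term_users, newhires, etc.) do not exist until something extracts them. Outside Splunk, the classification the question describes can be sketched in Python (the bucket names here are my own, not existing extractions):

```python
import re

logs = [
    "2020-04-08 17:34:51,514:INFO: terminating hluhsha",
    "2020-04-08 17:34:52,214:INFO: new_hire grdorimg.",
    "2020-04-08 17:34:52,892:INFO: rehire pabisanh.",
    "2020-04-08 17:34:53,589:INFO: User id 135062 (hgevpsar) changes supervisor from klaurns/id=14654 to fakesuper/id=42",
]

buckets = {"terminating": [], "new_hire": [], "rehire": [], "super_change": []}
for line in logs:
    # Simple event types: "<keyword> <username>"
    m = re.search(r"INFO: (terminating|new_hire|rehire) (\w+)", line)
    if m:
        buckets[m.group(1)].append(m.group(2))
    elif "changes supervisor" in line:
        # Supervisor changes: username is in parentheses
        m = re.search(r"\((\w+)\) changes supervisor", line)
        if m:
            buckets["super_change"].append(m.group(1))
```

Each regex here corresponds to one rex you would write in SPL; the side-by-side column layout then comes from padding the shorter lists, which is the zip_longest-style step the related question above asks about.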
Hello, excuse my lack of expertise with Splunk. Could you please let me know how I can track when a specific user logs on to and off of a computer? I am using a universal forwarder on the DC, for the security logs only. I can see a lot of events inside the Splunk server, so it must be working. Thank you
I have a set of events as below:

EmployeeID    Company
C123          ABC
C456          DEF
C789
2598
3648

Here, all the EmployeeIDs starting with C are contractors, and some of them have Company values. Now I want to achieve 2 things:
1. I want to populate "Unknown" where the EmployeeID starts with C but there is no Company value.
2. For all the other EmployeeIDs (not starting with C), I want to populate "Fulltime".

Thanks in advance!!
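The two rules collapse into one conditional. A Python sketch of the intended mapping (in SPL this would typically be an eval with if/case, but the logic is identical):

```python
def company_value(employee_id, company):
    # Rule 1: contractors (IDs starting with "C") keep their company,
    #         or get "Unknown" when it is missing.
    # Rule 2: everyone else gets "Fulltime".
    if employee_id.startswith("C"):
        return company if company else "Unknown"
    return "Fulltime"
```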
Hello all, in Enterprise Security I need to write searches for the scenarios below; can someone help with writing these?

1. A search that shows hosts that have had more and more vulnerabilities over three months. The intent is to find servers that have never been patched.
   a. There should actually be 2 searches, one for workstations and one for servers.
2. Incorporate software versions from Rapid7 and show vulnerabilities per version. This should then allow for a way to view the servers that are associated with these most vulnerable software versions.
   a. There should actually be 2 searches, one for workstations and one for servers.

Thanks in advance