All Topics

Hi, I am very new to Splunk and inherited an environment without much documentation. Can anyone help with the following queries?

- A list of all UF hosts and the HF each of them forwards to
- A list of all HFs and the indexer each of them forwards to
- A list of all indexers

Thank you.
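One way to sketch this mapping from Splunk's own internal logs (this assumes the forwarders send their _internal data upstream, and that the standard metrics.log tcpin_connections fields are present — verify against your environment):

```
index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(_time) as last_seen by hostname, host
| rename hostname as sending_instance, host as receiving_instance
```

Run against the indexers to see which HFs/UFs connect to them; events logged on an HF show which UFs connect to that HF.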
I am trying to create a role that allows a user to enable and disable the inputs within the Splunk Add-on for Microsoft Office 365 without giving them admin rights. What capabilities would I assign to the new role?
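As a sketch only: Splunk typically registers an edit_modinput_<input_name> capability per modular input kind, so a role stanza in authorize.conf might look like the below. The exact capability name here is an assumption — check Settings > Roles (or the add-on's README) for the actual edit_modinput_* capabilities the O365 add-on registers:

```
[role_o365_input_manager]
importRoles = user
# Hypothetical capability name; verify the real edit_modinput_*
# capabilities registered by the add-on on your instance
edit_modinput_splunk_ta_o365_management_activity = enabled
list_inputs = enabled
```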
Hello all. I started a Splunk class a while back and am coming back to it now. The initial step, of course, was to download Splunk Enterprise. It appears my license has expired and I need an admin to update the license? This is for educational purposes only. How might I go about being able to use Splunk for this class? Use a different email/login? Thank you.
Hi Splunkers, some examples from our logs:

[Time:11:03:01] [Function:upload] [User:aaa]
[Time:11:03:10] [Function:upload] [User:aaa]
[Time:11:03:15] [Function:upload] [User:ccc]
[Time:11:05:30] [Function:upload] [User:aaa]

and my search:

| bin _time span=1m
| dedup _time

I want to count the events per 1 minute — or rather, deduplicate events that fall within the same minute, but only for the same user. I expect a result like this, where user "ccc" is not filtered out:

[Time:11:03:01] [Function:upload] [User:aaa]
[Time:11:03:15] [Function:upload] [User:ccc]
[Time:11:05:30] [Function:upload] [User:aaa]

But my search also filters other users' events, and the result looks like this:

[Time:11:03:01] [Function:upload] [User:aaa]
[Time:11:05:30] [Function:upload] [User:aaa]

These are only example logs; in reality there are not just two users but hundreds. Can somebody help me with how I should search? Thanks
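A sketch of one fix, assuming the user value is extracted into a field named User: dedup accepts multiple fields, so deduplicating on the minute bucket and the user together keeps one event per user per minute:

```
| bin _time span=1m
| dedup _time User
```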
Hi there, I need help finding the status-code error rate where the status code is > 400. I have the query below to timechart the error rate, which works fine:

index=apache_core userAgent!="nginx/*" source="*access.log*" requestURI!="/web/app*" NOT (requestURI="/api/xyz/*" OR requestURI="/api/yyy/*" AND statusCode=404) earliest=-30m latest=now
| timechart span=5m limit=0 eval((count(eval(statusCode>=400)) / count()) * 100) as ErrorRate

But to create an alert, I don't want the timechart — just the error rate over the last 30 minutes. A stats count with the eval statement doesn't work.

Thanks, DD
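One hedged way to get a single error-rate number instead of a timechart, keeping the same base search (this assumes statusCode is numeric):

```
index=apache_core userAgent!="nginx/*" source="*access.log*" requestURI!="/web/app*" NOT (requestURI="/api/xyz/*" OR requestURI="/api/yyy/*" AND statusCode=404) earliest=-30m latest=now
| stats count(eval(statusCode>=400)) as errors, count as total
| eval ErrorRate = round((errors / total) * 100, 2)
```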
Hi, I have one index with two sources (source=source1 and source=source2). Both sets of events have two common fields (common_field1 and common_field2). The events from source1 have three fields (source1_field1, common_field1, common_field2). The events from source2 have three fields (source2_field1, common_field1, common_field2). I tried the following without success:

(source=source1 OR source=source2)
| table common_field1, common_field2, source1_field1, source2_field1

There are more events in source1 than in source2. The table should have one row per source1 event. Source2's events will be matched based on the common fields, and there will be many instances where the same source2 event is used.
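One sketch of this lookup-style enrichment without a join (it assumes each common_field1/common_field2 pair maps to a single source2_field1 value):

```
(source=source1 OR source=source2)
| eventstats values(source2_field1) as source2_field1 by common_field1, common_field2
| search source=source1
| table common_field1, common_field2, source1_field1, source2_field1
```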
We have about 100 servers on which we want to monitor one file. I'd like to have one monitor stanza that can loop through a list of servers, i.e. "monitor://serverlist.txt\directory\logfile.txt". Is this possible or is it possi
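Monitor stanzas don't iterate over a host list, but as a sketch (the paths and index name below are hypothetical), the two usual patterns are a deployment-server-distributed inputs.conf on each server's own forwarder, or one stanza per UNC path from a central forwarder:

```
# Preferred: same inputs.conf pushed to each server's own universal forwarder
[monitor://D:\directory\logfile.txt]
index = myindex

# Alternative: one stanza per server via UNC path from one central forwarder
[monitor://\\server01\share\directory\logfile.txt]
index = myindex
```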
Hello everyone,

I am trying to upgrade my all-in-one Splunk Enterprise instance, currently on version 7.2.4.2, to the latest version, 8.0.5. According to the documentation here: https://docs.splunk.com/Documentation/Splunk/8.0.5/Installation/HowtoupgradeSplunk I can do it without any intermediate upgrade. I tried to upgrade according to this documentation: https://docs.splunk.com/Documentation/Splunk/8.0.5/installation/UpgradeonWindows

I checked with the "Splunk Platform Upgrade Readiness App" and I do not have any critical points — only warnings, which I can handle right after updating.

Here is what I am doing to upgrade my Splunk Enterprise. I downloaded the latest version from https://splunk.com/downloads and then stopped Splunk entirely with the command "$SPLUNK_HOME$/bin/splunk stop" to be sure that nothing can create a conflict during the upgrade (according to the official documentation this is not mandatory, contrary to a Linux server). My Splunk is installed on an attached drive located at "S:\"; I modified my props.conf in order to change the environment variable $SPLUNK_HOME$. Through both methods (GUI and CLI) it returns these screens:

I read on other posts to run the CLI as admin, and used this command: msiexec.exe /i splunk-8.0.5-a1a6394cc5ae-x64-release.msi /l*v S:\TEMP\Splunkinstall.log INSTALL_DIR="S:\Splunk"

I am trying to upgrade with the same local admin account that I used for my first installation. I guess the wizard detected that Splunk is already installed, because I only get these 2 screens:

And here is the pop-up error, which displays 3 times in a row:

Then it starts "Copying new files" to the C:\ drive (where my OS is installed). The log file generated during the installation is very verbose, but I did not find anything interesting in it around the time the "installation failed" pop-up appeared.

Do I need to overwrite my "S:\Splunk" folder with "C:\Program Files\Splunk"?
I guess this installer is meant to avoid this kind of messy upgrade. What can I do to upgrade the Splunk instance installed on my "S:\" drive?
Recently I noticed that an important field is not being auto-extracted with the _json sourcetype, while all other attributes are still being extracted as fields just fine. In the example below, Properties.CorrelationId is not available, and attempting to run stats on it produces no results. This has always worked — what would cause this?

{
  "Level": "Error",
  "MessageTemplate": "SPC Fulfillment controller has reported an error with message: [{httpResponseMessage}], code: [{httpResponseCode}] and status code [{httpResponseStatusCode}]",
  "RenderedMessage": "SPC Fulfillment controller has reported an error with message: [\"Server will not process, error in request. SKU not found [1105716399999].\"], code: [\"015-002-017\"] and status code [400]",
  "Properties": {
    "httpResponseMessage": "Server will not process, error in request. SKU not found [1105716399999].",
    "httpResponseCode": "015-002-017",
    "httpResponseStatusCode": 400,
    "EndpointVersion": "v2",
    "SourceContext": "SPC.Services.Fulfillment.API.Controllers.OrdersController",
    "ApplicationName": "fabric:/spc/fulfillment",
    "ApplicationTypeName": "SPC.Services.Fulfillment",
    "CodePackageVersion": "2.81.0.2020072462946-08d393d6",
    "ServiceName": "fabric:/spc/fulfillment/API",
    "ServiceTypeName": "SPC.Services.Fulfillment.APIType",
    "InstanceId": 132406486505333708,
    "PartitionId": "898c1f6a-ab4e-4c96-81f4-da999f2eb0f1",
    "ServiceManifestVersion": "2.81.0.2020072462946-08d393d6",
    "NodeName": "_sbp01-1FE_3",
    "CorrelationId": "abb55590-1527-f9c2-d919-8ea586f1083a",
    "Environment": "p01-1"
  }
}
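As a workaround while investigating, the field can be pulled out explicitly at search time with spath (a standard SPL command; the path below matches the sample event):

```
... | spath path=Properties.CorrelationId output=CorrelationId
| stats count by CorrelationId
```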
Questions on the inputs.conf:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = true
renderXml = 1

I see the default is to disable XML; however, there are vague references to XML in the docs. I saw it sets the source to XML in the next line: source = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational. Are both types possible, and is XML preferred or recommended? I was looking for some advice and didn't see any in the docs. Thank you.
Hi All, I need help in getting the data for devices whose downtime is > 15 minutes. Below is the query I am using:

index=opennms "uei.opennms.org/nodes/nodeUp" OR "uei.opennms.org/nodes/nodeDown"
| fieldformat Time=strftime(Time,"%Y-%m-%d %l:%M:%S %p")
| sort - Time
| eval Downtime = tostring(now() - Time, "duration")
| rex field=Downtime "(?P<Downtime>[^.]+)"
| table Hostname Status Classification "Site Code", sitename, Time Downtime

Output:

Hostname     Status  Classification  Site Code  sitename  Time                   Downtime
Device name  Down    Bronze          LHC        Luanda    2020-08-05 2:02:40 PM  00:14:45
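A sketch of filtering before formatting (it assumes Time holds the epoch time of the nodeDown event): compare the raw seconds first, then format for display:

```
index=opennms "uei.opennms.org/nodes/nodeUp" OR "uei.opennms.org/nodes/nodeDown"
| eval DowntimeSecs = now() - Time
| where DowntimeSecs > 900
| eval Downtime = tostring(DowntimeSecs, "duration")
| table Hostname Status Classification "Site Code" sitename Time Downtime
```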
Hi, I need help with my query. I want to achieve the kind of table shown below. What I want is to get the total_count value for each app by adding the values under count and putting the sum under total_count:

app           dest_port                                      count                      total_count
ssl           10001 10020 13000 13006 22790 26107 443 44345  4 21 2 3 2 8 19 22 55 323  ?
web-browsing  1000 21 443 5000 7788 80 8003 8080             2 3 4 7 1000 200 12 21     ?
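One sketch (assuming the per-port counts come from a stats count): eventstats can add the per-app total alongside each row without collapsing the port breakdown:

```
... | stats count by app, dest_port
| eventstats sum(count) as total_count by app
```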
Suppose I have two phases, each with a first and last date:

Phase 1 = 1 Jan 2020 to 1 March 2020
Phase 2 = 1 Apr 2020 to 1 Jun 2020

If I get an execution date of 3 Feb 2020, then my verified column should display Phase 1; otherwise Phase 2.

Example:
id  verified  execution Date
1   Phase1    3-Feb-20
2   Phase1    4-Feb-20
3   Phase1    5-Feb-20
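A sketch with eval/case (the field name and date format are assumed from the example; 'execution Date' contains a space, so single quotes are needed inside eval):

```
| eval exec_epoch = strptime('execution Date', "%d-%b-%y")
| eval verified = case(
    exec_epoch >= strptime("1-Jan-20", "%d-%b-%y") AND exec_epoch <= strptime("1-Mar-20", "%d-%b-%y"), "Phase1",
    exec_epoch >= strptime("1-Apr-20", "%d-%b-%y") AND exec_epoch <= strptime("1-Jun-20", "%d-%b-%y"), "Phase2")
```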
I have different field values (e.g. teamNameTOC, teamNameEngine) under the same field name (teamName) and want to merge these values into a single report. I have tried the below, and the output is attached:

teamName=DA OR teamName=DBA OR teamName=Engine OR teamName=SE OR teamName=TOC
| top limit=50 teamName

OUTPUT
teamName  count  percent
TOC       233    50.000000
Engine    84     18.025751
DA        66     14.163090
SE        55     11.802575
DBA       28

I need all of the above values (team name, count, %) in one row as a single entity. The percentage should adjust itself if I add more values. The output should look like:

teamName   count  percent
All Teams  466    100.00
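One sketch of collapsing everything into a single row — percent is fixed at 100 since the row covers all matched events, so it stays correct as teams are added:

```
teamName=DA OR teamName=DBA OR teamName=Engine OR teamName=SE OR teamName=TOC
| stats count
| eval teamName="All Teams", percent="100.00"
| table teamName count percent
```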
Can we use Splunk as a syslog server? If yes, what are the pros and cons of using Splunk as a syslog server?
I want to remove the Settings menu from the menu bar when I log in as a simple (non-admin) user.
Hello All, I am looking for a solution to establish a kind of IT inventory, based on logins. Is there any working solution that matches users to devices they logged into? Available source is AD... See more...
Hello All, I am looking for a solution to establish a kind of IT inventory based on logins. Is there any working solution that matches users to the devices they logged into? The available sources are AD and wineventlog:security. Thank you for any ideas.
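A sketch against the Windows security logs (EventCode 4624 is the standard successful-logon event; the index name and the user/host field names are assumptions that depend on your TA's extractions):

```
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624
| stats earliest(_time) as first_logon latest(_time) as last_logon count by user, host
| convert ctime(first_logon) ctime(last_logon)
```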
In my scenario, I would like the "Settings" button to be hidden if a normal user logs in, but displayed if it is the administrator. Splunk version 8.
Hi All, I am hitting an API service which has 7 to 8 backend calls, and I also get all of the backend call response times in my query.

Problem statement: My API makes 7 to 8 backend calls, and the same backend is called by different APIs. For a single hit, I see that a message-id is generated which is the same across all backends. But under heavy load, say 3,000 transactions in one hour, how do I construct a query that takes the message-id from the parent call and follows it through the backend calls?
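One hedged sketch (the index and the message_id/backend/response_time field names are all assumptions): grouping by the shared message id keeps each transaction's parent and backend calls together even under load:

```
index=api_logs
| stats min(_time) as start values(backend) as backends avg(response_time) as avg_resp count by message_id
```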
Hi, for some reason fieldformat doesn't work with foreach x,y,z. Sometimes it works; mostly it doesn't. Here is a sample which didn't work on at least our Splunk 7.3.3 and 8.0.5. Any hints are welcome.

index=_* earliest=-w@w latest=@d
| fields _indextime, _time
| eval lat=_indextime - _time
| bin span=1w _time
| stats count as Events avg(lat) as AvgLat max(lat) as MaxLat min(lat) as MinLat by _time
| eval AvgLatMins = round(AvgLat/60, 0), AvgLatHrs = round(AvgLatMins/60, 0), AvgLat = round(AvgLat, 0), MaxLat = round(MaxLat, 0)
| foreach AvgLat MinLat MaxLat
    [eval <<FIELD>> = if(<<FIELD>> < 0, 0, <<FIELD>>)
    | fieldformat <<FIELD>> = tostring(<<FIELD>>, "duration")]

When I change fieldformat to eval it works, and if I do fieldformat for the individual fields one by one it also works. And there is no change even if I try " and ' around <<FIELD>> (which shouldn't be needed for these field names).

r. Ismo
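A sketch of the per-field workaround mentioned above: keep the numeric clamping inside the foreach, and apply fieldformat to each field individually outside it:

```
| foreach AvgLat MinLat MaxLat
    [eval <<FIELD>> = if(<<FIELD>> < 0, 0, <<FIELD>>)]
| fieldformat AvgLat = tostring(AvgLat, "duration")
| fieldformat MinLat = tostring(MinLat, "duration")
| fieldformat MaxLat = tostring(MaxLat, "duration")
```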