All Topics

Hi all, I am trying to integrate MS SQL audit log data with a UF instead of DB Connect. What is the best and recommended way to do it so that all fields are mapped?

At the moment it is integrated with the UF using the "Splunk Add-on for Microsoft SQL Server". With that, the MS SQL events can be identified by SourceName=MSSQLSERVER or SourceName=MSSQL*. However, it does not work properly, as most of the fields are not extracted and mapped. For example, the user is also not translated: User=NOT_TRANSLATED
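In case it helps to compare, a minimal UF input for this is sketched below. It assumes the audit events land in the Windows Application event log, and the index name is made up; check the add-on's documentation for the source names it actually expects. Note also that the field extractions and user translation are done by the add-on at search time, so the add-on must be installed on the search head as well as the UF.

```
# inputs.conf on the UF -- a sketch, not a verified add-on config.
# Assumes SQL Server writes its audit events to the Application log.
[WinEventLog://Application]
disabled = 0
# Keep only SQL Server events; adjust to your instance's source name.
whitelist = SourceName="^MSSQL"
index = mssql   # hypothetical index name
```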
Hi Splunkers, I'm struggling with setting up an appropriate line breaker for data from a log file; an example is below. I tried setting the event-breaking policy to "every line", but that doesn't work, as the last line shown in the UI consists of 3 events. I would like to break events on the [...s][info][gc] prefix, but I'm not entirely sure whether that's possible. Could you please take a look?

[883722.688s][info][gc] GC(40135) Pause Init Mark (process weakrefs) 1653.109ms
[883734.774s][info][gc] GC(40135) Concurrent marking (process weakrefs) 12086.056ms
[883736.181s][info][gc] GC(40135) Concurrent precleaning 1406.445ms
[883738.907s][info][gc] GC(40135) Pause Final Mark (process weakrefs) 2724.588ms
[883738.908s][info][gc] GC(40135) Concurrent cleanup 72424M->72273M(153600M) 0.229ms
[883739.217s][info][gc] GC(40135) Concurrent evacuation 308.624ms
[883739.217s][info][gc] GC(40135) Pause Init Update Refs 0.137ms
[883742.192s][info][gc] GC(40135) Concurrent update references 2975.050ms
[883742.195s][info][gc] GC(40135) Pause Final Update Refs 1.175ms
[883742.196s][info][gc] GC(40135) Concurrent cleanup 80318M->62137M(153600M) 0.204ms
[883742.197s][info][gc] Trigger: Allocated since last cycle (15943M) is larger than allocation threshold (15360M)
[883742.224s][info][gc] GC(40136) Concurrent reset 26.618ms
[883743.575s][info][gc] GC(40136) Pause Init Mark 1349.467ms
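For what it's worth, a line breaker keyed on that prefix is possible. A sketch is below; the sourcetype name gc_log is made up, so use your own, and the stanza belongs on the parsing tier (indexer or heavy forwarder).

```
# props.conf -- a sketch for breaking on the [<elapsed>s][info][gc] prefix
[gc_log]
SHOULD_LINEMERGE = false
# The capture group is the text consumed between events (the newline);
# the lookahead asserts the next event's prefix without consuming it.
LINE_BREAKER = ([\r\n]+)(?=\[\d+\.\d+s\]\[)
# The bracketed value is JVM uptime, not a wall-clock timestamp, so
# take the event time from the index time instead.
DATETIME_CONFIG = CURRENT
```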
Hi everyone, is there an official document listing the necessary API permissions? Or does anyone know what these permissions are? Thank you
Hi, I'm unable to install an app on my Splunk Cloud Platform evaluation! Is this a limitation? I.e., can't you test / evaluate applications on a Splunk Cloud Platform evaluation? Thanks for clarifying.
Hi, my system is Linux. I am trying to monitor 3 users in an index: the last time they logged in, their IP address, etc. There are over 180 users. How do I get the search to show just the three users I want, e.g. James, Peter, and John? Thanks
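A stats-based sketch for this kind of question is below; the index and field names (linux_secure, user, src_ip) are assumptions, so substitute whatever your data actually uses:

```
index=linux_secure (user="james" OR user="peter" OR user="john")
| stats latest(_time) AS last_login, latest(src_ip) AS last_ip BY user
| eval last_login = strftime(last_login, "%Y-%m-%d %H:%M:%S")
```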
Hi, I'm following this article in an attempt to ingest Teams data into Splunk, and I need some help with testing the webhook. Can someone confirm what the webhook URL is?

curl WEBHOOK_ADDRESS -d '{"value": "test"}'

Also, the documentation for the Teams Add-on for Splunk states that "the Teams Webhook is not available for Splunk Cloud installations." Has anyone found an alternative solution for cloud deployments? We use Splunk in a hybrid (cloud + on-prem) environment. Many thanks.
Hi, I'm trying to extract some JSON values into tables for a dashboard. The log line I'm using is something like the below:

username=myUser notificationPreferences= [class NotificationPreferences { category=cat1, categoryDescription=category1 receiveEmailNotifications=false receiveSmsNotifications=false }, class NotificationPreferences { category=cat2 categoryDescription=category2 receiveEmailNotifications=false receiveSmsNotifications=true }]

As you can see, it's just a standard toString on a Java class that the developers are outputting. What I want is a table of users and categories, with each category having the associated details, e.g.:

User     Category   Email  SMS
myUser1  Category1  false  false
myUser1  Category2  false  true
myUser2  Category1  true   true

I started by trying to tidy up the JSON:

| rex field=notificationPreferences mode=sed "s/\[class NotificationPreferences/prefs:[ /g"
| rex field=notificationPreferences mode=sed "s/, class NotificationPreferences/, /g"

which makes the notificationPreferences field a bit better:

username=myUser notificationPreferences= prefs:[ { category=cat1, categoryDescription=category1 receiveEmailNotifications=false receiveSmsNotifications=false },{ category=cat2 categoryDescription=category2 receiveEmailNotifications=false receiveSmsNotifications=true }]

But from here I'm struggling with what I need to do in terms of spath and extractions to get both categories to work. I only ever seem to get the first category to appear in my results. Any help would be great. Thanks
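A common pattern for repeated groups like this is rex with max_match=0 plus mvzip/mvexpand, rather than spath. A sketch against the sample above is below; the regex assumes the exact toString spacing shown, so treat it as a starting point:

```
... base search ...
| rex max_match=0 "category=(?<category>\w+),?\s+categoryDescription=(?<categoryDescription>\w+)\s+receiveEmailNotifications=(?<email>\w+)\s+receiveSmsNotifications=(?<sms>\w+)"
| eval pref = mvzip(mvzip(categoryDescription, email), sms)
| mvexpand pref
| eval pref = split(pref, ",")
| eval Category = mvindex(pref, 0), Email = mvindex(pref, 1), SMS = mvindex(pref, 2)
| table username Category Email SMS
```

mvexpand splits the multivalue field into one row per category while keeping username on each row, which is why the preferences are zipped into a single field first.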
Dears, I have installed the Splunk App for Linux and its add-on in my paid-license Splunk Enterprise instance. I installed the Splunk forwarder on all hosts and added cpu, vmstat, and df to inputs.conf on the remote servers. Now I want to create a dashboard for live monitoring of the mentioned Linux metrics, plus alerts for them. I need help doing that; if you have any good documents, please share.
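As a starting point for a dashboard panel, the cpu input from the Unix add-on can usually be charted along these lines. The index name is an assumption (check where your inputs actually write), and the field names should be verified against your add-on version:

```
index=os sourcetype=cpu
| eval cpu_used_pct = 100 - pctIdle
| timechart span=5m avg(cpu_used_pct) AS avg_cpu_pct BY host
```

The same search, with a threshold condition such as avg_cpu_pct > 90, can then be saved as an alert.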
Hi Team, we have a field called Status, with values Status=Start and Status=Success, and OrderId is another field. When an OrderId has Status=Start and there is no Status=Success within 10 minutes, it should be considered a failure. May I know how to write a condition for this?
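One way to express the condition is to pivot the two timestamps per OrderId with stats and compare them. A sketch is below, with the index name assumed:

```
index=orders (Status=Start OR Status=Success)
| stats min(eval(if(Status=="Start", _time, null()))) AS start_time,
        max(eval(if(Status=="Success", _time, null()))) AS success_time
        BY OrderId
| eval result = if(isnull(success_time) OR (success_time - start_time) > 600, "failure", "ok")
```

An order with no Success event at all also comes out as "failure", since success_time is null in that case.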
I have the following data that I'm trying to timechart the differences between:

2023-02-16T16:14:04: Data Processing Phase -1 completed
2023-02-16T14:01:00: Data Processing Phase -1 starting
2023-02-16T14:01:00: Data Collection Phase 3 (Final Collection Phase) completed
2023-02-16T11:34:10: Data Collection Phase 2 starting
2023-02-16T11:34:10: Data Collection Phase 1 completed
2023-02-16T11:34:10: Data Collection Phase 3 (Final Collection Phase) starting
2023-02-16T11:34:10: Data Collection Phase 2 completed
2023-02-16T09:01:36: Data Collection Phase 1 starting

I've sliced up the data using the following SPL, but that only gives me a look at the time differences over the selected time range. I can't figure out how to slice the data up so that I can timechart the differences over multiple runs of the data collection phases.

| stats first(_time) as End, last(_time) as Start by Phase, PhaseIdentifier
| eval RunTime = round((End - Start) / 60, 0)
| eval Start=strftime(Start, "%c")
| eval End=strftime(End, "%c")
| rename RunTime AS "RunTime (Minutes)"

I'm used to working more with metrics and logs that spit out runtimes, so this has been vexing me for entirely too long...
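A sketch of one way to key the durations by run rather than by time range: number each "starting" event per phase with streamstats, then aggregate per run. This assumes Phase and a status field (values starting/completed) are already extracted:

```
... base search extracting Phase and status ...
| sort 0 _time
| streamstats sum(eval(if(status=="starting", 1, 0))) AS run BY Phase
| stats min(_time) AS start, max(_time) AS end BY Phase, run
| eval runtime_min = round((end - start) / 60, 0)
| eval _time = start
| timechart span=1d max(runtime_min) BY Phase
```

Each "starting" event increments the run counter for its phase, so every start/completed pair gets its own row before the timechart.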
Hi, I need some help crafting a search query that counts by a regex and displays the counts in a table.

The log message we have is "Successfully submitted: admin-mobile" or "Successfully submitted: admin". I'd like to count the number of messages containing "admin-mobile" and "admin" respectively and show them in a table.

I know that I can get one count with | search "Successfully submitted: admin-mobile" | stats count, and it will show in a table. The question is how to get the other count. Thanks.

The result I'd like to have is like below, in a table format:

submissionType   count
admin-mobile     999
admin            888
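A sketch that gets both counts in one pass by extracting the suffix into a field first; the field name submissionType is made up to match the desired table:

```
"Successfully submitted:"
| rex "Successfully submitted: (?<submissionType>admin(?:-mobile)?)"
| stats count BY submissionType
```

Because "admin-mobile" is tried before the optional suffix is dropped, the regex keeps the two message types distinct rather than counting every event as "admin".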
There is a bug in the job "Share" button: it only works for admins! Analysts have mentioned that within the search head, they are unable to share jobs with one another. This means that while anyone is investigating alerts, reviewing things, etc., they must share the actual SPL query, and whoever they are working with has to re-run that search. I know that the default "admin" role allows a user to view the private jobs of others, but of course we don't want to give the "admin" role to all of the analysts. I reviewed all of the Splunk capabilities and could not find one related to this permission. I am going to open a case, but I am looking for a quicker workaround.
Very strange scenario. I'll use a rex statement to retrieve data and it works perfectly. If I copy and paste the rex command that Splunk used (copied from the Job Inspector), it does not work; I receive an error.

An actual snippet of the raw data that I used as the example in my erex statement (the data in bold is what went into the example):

"usbProtocol":1,"deviceName":"Canon Digital Camera","vendorName":"Canon Inc.",

The Job Inspector spat out the following:

| rex "(?i)\"deviceName\\\":\\\"(?P<Device>[^\\]+)"

And the data looked perfect, like so: Canon Digital Camera

But if I use that rex statement in my search, Splunk says nay nay. The error received was: "Error in 'rex' command: Encountered the following error while compiling the regex '(?i)"deviceName\":\"(?P<Device>[^\]+)': Regex: missing terminating ] for character class."

I reached out to a coworker, who provided:

| rex ".*deviceName(?<Model>.*?),"

It works to a degree, but includes characters that I'd rather not see in my data. An actual example of what is spat out:

\":\"Canon Digital Camera\"

Just also mentioning this in case it matters: where there is no data available/null within the "deviceName" raw data, it shows like this:

\":\"\"

I'd really appreciate some guidance with my regex. I've been delving into this lately and have used many training materials, but can't seem to figure this one out!
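For what it's worth, the Job Inspector prints the regex at the PCRE level, while a rex inside a quoted search string needs one extra layer of escaping: each literal backslash in the pattern has to be written as \\\\, and each quote as \". A hand-escaped sketch, assuming the raw event literally contains deviceName\":\" with backslashes before the quotes (as the coworker's extraction suggests):

```
| rex "deviceName\\\\\":\\\\\"(?<Device>[^\\\\]+)"
```

After SPL string unescaping, this hands the regex engine deviceName\\":\\"(?<Device>[^\\]+), which is the pattern the Job Inspector showed; pasting the inspector's version directly loses a backslash layer, which is why the character class appears unterminated.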
Hi, dashboard help is needed; I am attaching pictures. Please provide SPL. I need to dynamically connect an item selected from the "Year OA Motives" drop-down to the tabs/buttons below it (SPL needed). Clicking each tab should open a table containing the corresponding data in a panel below the tabs (SPL needed). Regards, Selina.
I have a dashboard created with network data already in it. I am trying to create a search dropdown or search bar so I can search for a specific IP, and the dashboard will then show all the data for that IP.
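In Simple XML, a text input that feeds a token into the panel searches is usually enough for this. A sketch with made-up index and field names (index=network, src_ip, dest_ip) is below; the existing panel queries just need the $ip_tok$ token added to their filters:

```xml
<form>
  <fieldset submitButton="false">
    <input type="text" token="ip_tok">
      <label>IP address</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=network (src_ip="$ip_tok$" OR dest_ip="$ip_tok$")</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>
```

The default of * keeps the panels populated until an IP is typed in.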
There are about 3 ways to set up outputs.conf when you are trying to set up forwarders: you can either do a CLI entry to add a forward-server (and indexer), or you can edit the outputs.conf files. We made our outputs.conf according to a tutorial we saw, but we are having issues getting data in. So the question is: what is broken about our outputs.conf file? (Also, side note: originally the .102 address wasn't in the files, and neither was the default-autolb group.) Thanks
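Since the file itself didn't make it into the post, for comparison, a minimal working outputs.conf usually looks like the sketch below; the IP and group name here are placeholders, and the receiving port must also be enabled on the indexer side (e.g. splunk enable listen 9997):

```
# outputs.conf on the forwarder -- a minimal sketch with placeholder values
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 10.0.0.102:9997
```

If data still doesn't arrive, splunkd.log on the forwarder (look for "Connected to idx" or connection-refused messages) usually says which side is at fault.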
Hi, I have a search where I am attempting to extract 2 different fields from one string response using rex:

1st field: rex \"traceId\"\s:\s\"?(?<traceId>.*?)\"
2nd field: rex "\"statusCode\"\s:\s\"?(?<tstatusCode>2\d{2}|4\d{2}|5\d{2})\"?"

I am attempting to dedup the 1st field (traceId) before I pipe those results into the 2nd field (statusCode). I have attempted multiple variations based on Splunk threads and other internet resources. Below is the query I am making (I have tried a lot of other permutations; this is just one):

index=myCoolIndex cluster_name="myCoolCluster" sourcetype=myCoolSourceType label_app=myCoolApp ("\"statusCode\"")
| rex \"traceId\"\s:\s\"?(?<traceId>.*?)\"
| dedup traceId
| rex "\"statusCode\"\s:\s\"?(?<tstatusCode>2\d{2}|4\d{2}|5\d{2})\"?"

Below is the response from the log (it looks like a JSON object, but it is a string):

"{ "correlationId" : "", "message" : "", "tracePoint" : "", "priority" : "", "category" : "", "elapsed" : 0, "locationInfo" : { "lineInFile" : "", "component" : "", "fileName" : "", "rootContainer" : "" }, "timestamp" : "", "content" : { "message" : "", "originalError" : { "statusCode" : "200", "errorPayload" : { "error" : "" } }, "standardizedError" : { "statusCode" : "500", "errorPayload" : { "errors" : [ { "error" : { "traceId" : "9539510-d8771da0-a7ce-11ed-921c-d6a73926c0ac", "errorCode" : "", "errorDescription" : "" "errorDetails" : "" } } ] } } }, }"

The intent of the query is to: extract the field traceId, then dedup traceId (to remove duplicates), then extract the field statusCode and sort the statusCode values. When running these regexes independently of each other they work as expected, but I need to combine them into one query, as I will be creating charts in my next step. All help is appreciated.
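A sketch of one ordering that tends to work: run both extractions first, then dedup, so the statusCode field already exists on the events that survive the dedup. The regexes below are tightened to stop at the closing quote; the field names follow the post:

```
index=myCoolIndex cluster_name="myCoolCluster" sourcetype=myCoolSourceType label_app=myCoolApp "\"statusCode\""
| rex "\"traceId\"\s:\s\"(?<traceId>[^\"]+)\""
| rex "\"statusCode\"\s:\s\"(?<statusCode>[245]\d{2})\""
| dedup traceId
| sort statusCode
```

Note the first rex in the posted query is missing its surrounding quotes, which by itself would make the combined search fail to parse.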
Dear Splunkers, I just set up a little testing environment, with Splunk Enterprise running smoothly on a Debian server and a Universal Forwarder running on a Windows 10 machine. My goal is to send some Sysmon logs into Splunk.

I first started by following this page from the official Splunk documentation: https://docs.splunk.com/Documentation/AddOns/released/MSSysmon/Install

But unfortunately, this page does not explain how to install the client side of the app (is there a client side anyway? Nothing is explained about it). What I did is set up my inputs.conf file in etc/system/local on the client machine. It partially worked, as the data is being sent to the Splunk server, but none of the fancy dashboards and graphs that the Sysmon for Splunk add-on or app is supposed to display are available.

I know I must have missed something on the client part, but I also have to mention that the kafkaesque, intricately interlinked Splunk documentation does not help me much, to say the least. It would be extremely nice if someone could turn on the light, because so far my Splunk journey is as black as midnight on a moonless night. Thanks a lot, Gargantua
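For the client side, the usual missing piece is an input stanza on the UF pointing at the Sysmon event-log channel. A sketch is below; the index name is an assumption, and the resulting sourcetype should be checked against whichever Sysmon add-on is installed on the server, since its dashboards key off the sourcetype:

```
# inputs.conf on the Windows UF -- a sketch, not a verified add-on config
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true
index = sysmon   # hypothetical; create it or use your own
```

The dashboards themselves live in the add-on/app on the Splunk Enterprise side, so it must be installed there even though the UF only needs the input stanza.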
Hello group, I am trying to use DB Connect to connect to a Hive server that requires Kerberos authentication:

jdbc:hive2://xxx2.company.com:2181,xxx1.company.com:2181,xxx3.company.com:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@ADDEV.COMPANY.COM

I have the keytab file somewhere. Which config file do I need to modify?
On Splunk 9.0.0 on Windows, on one of our dedicated deployment servers, when we go to Settings > Forwarder Management in the Web UI we get a nice list of clients, i.e. systems running the Universal Forwarder. So far so good. My question is: how can I see what search is generating this result, i.e. this list of machines? Unlike some of the other screens, there is no magnifying-glass icon to click on to invoke the search pane.
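That page isn't backed by an ordinary SPL search; it reads the deployment server's REST endpoints. You can query the same data yourself from the search bar on the deployment server; a sketch (field names may vary by version):

```
| rest /services/deployment/server/clients splunk_server=local
| table hostname ip utsname lastPhoneHomeTime
```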