All Topics

I have JSON in the following format:

{
  "timestamp": "1625577829075",
  "debug": "true",
  "A_real": {
    "Sig1": { "A01": "Pass", "A02": "FAIL", "A03": "FAIL", "A04": "FAIL", "A05": "Pass", "finalEntry": "true" },
    "Sig2": { "A01": "Pass", "A02": "FAIL", "A03": "FAIL", "A04": "Pass", "A05": "FAIL", "finalEntry": "true" },
    "finalEntry": "true"
  }
}

and one CSV file as follows:

Id    Timestamp
A02   T1
A03   T2
A05   T3

I want to create a saved search using an outer join on Id and a transpose, which gives me a result like this:

Id    Sig1   Sig2
A02   Fail   Fail
A03   Fail   Fail
A05   Pass   Fail

Please suggest a query.
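Not having the exact data at hand, here is an untested sketch, assuming the events live in a sourcetype called my_json and the CSV has been uploaded as a lookup file ids.csv with an Id column (both names are placeholders):

```
index=main sourcetype=my_json
| spath
| transpose column_name=field
| rex field=field "A_real\.(?<Sig>Sig\d+)\.(?<Id>A\d+)"
| where isnotnull(Sig)
| xyseries Id Sig "row 1"
| join type=outer Id
    [| inputlookup ids.csv | fields Id ]
```

transpose works per search result, so with many events you may want a stats/untable variant instead; treat this as a starting point.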
Hello lovely people,

I have a field whose values are concatenated by the "." character; the values may look like this:

uhss.didhikd.8979.ODJD.73HJ.Uber.39383.7854
dhikd.8979.ODUber.JD.73HJ.39383.7854
undñ_opl.Uber.iolddld
ddidjd_iddd_lioft_yes

What I want to detect is whether the string contains the characters ".Uber", meaning a "." right before "Uber". If it does, I want the variable RIDE to be 1; if not, I want it to be 0. So for the examples above:

FIELD                                          RIDE
uhss.didhikd.8979.ODJD.73HJ.Uber.39383.7854    1
dhikd.8979.ODUber.JD.73HJ.39383.7854           1
undñ_opl.Uber.iolddld                          1
ddidjd_iddd_lioft_yes                          0

Thank you so much, guys!
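An eval with match() and an escaped dot should do this; a minimal sketch, assuming the field is literally named FIELD:

```
... | eval RIDE=if(match(FIELD, "\.Uber"), 1, 0)
```

Note that under this rule the second sample above (ODUber, no dot directly before Uber) would get RIDE=0; drop the \. from the regex if any occurrence of Uber should count.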
Hi, how would I write TIME_PREFIX and TIME_FORMAT in the props configuration file for the following events (4 sample events given below)? Any help will be highly appreciated. Thank you!

[Tue Jun 15 00:00:26.337 EDT 2021] [CommonPool:6554] Process ID             = 744021
[Tue Jun 15 00:00:26.337 EDT 2021] [CommonPool:6554]
[Tue Jun 15 00:00:26.337 EDT 2021] [CommonPool:6554] Realm Server Details : XXX
[Tue Jun 15 00:00:26.337 EDT 2021] [CommonPool:6554]   Product              = Universal Messaging
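For what it's worth, an untested props.conf sketch for these events (the sourcetype name is a placeholder, and strptime handling of zone abbreviations like EDT can vary, so verify with a data preview):

```
[your_sourcetype]
TIME_PREFIX = ^\[
TIME_FORMAT = %a %b %d %H:%M:%S.%3N %Z %Y
MAX_TIMESTAMP_LOOKAHEAD = 35
```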
I am trying to pull information from AppD using selective curl calls for audit purposes (versus pulling a huge Dexter report). I have referenced the API docs on policy details (Policy API, appdynamics.com); however, the information I am after is specifically which health rules are in scope for a particular policy. As part of the audit, we are checking to make sure a policy is in place and enabled, but policies can cover multiple health rules, so I need a way to get that detail. Any assistance would be greatly appreciated.
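As a starting point, a curl sketch against the policy detail endpoint, which in recent controller versions returns the full policy configuration; whether the health-rule scope appears in that response (e.g. under the event filters) may depend on your controller version, so treat the path and response shape as assumptions to verify:

```
curl -s --user "user@account:password" \
  "https://<controller-host>/controller/alerting/rest/v1/applications/<application-id>/policies/<policy-id>"
```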
1. There will be 2 separate charts: CPU usage by process, and RAM usage by process.
2. Sometimes more than one instance of a process is running. For example, there can be 2 splunkd processes, one using 170M and the other using 65M; in the chart I'd like this represented as 1 splunkd with the total of 235M between the 2 splunkd processes.
3. I'd like an overlay: an additional line on the timechart that shows the total RAM/CPU consumed on the server itself.

See the screenshots below of the search I have constructed so far, and the printout of top on the server, which shows several processes with the same name that I'd like to aggregate in the timechart's results.
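A hedged sketch of the aggregation, assuming data from the Splunk_TA_nix ps input with fields process and RSZ_KB (the field names are assumptions and may differ in your data); summing by process collapses multiple instances of the same name, and addtotals adds the overlay series:

```
index=os sourcetype=ps
| timechart span=5m sum(RSZ_KB) BY process useother=f
| addtotals fieldname=Total
```

Set the Total series to a line overlay in the chart formatting options.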
Does anybody have experience with filtering (suppressing) Windows events using XML in Splunk inputs.conf? I have an XML query to filter specific events from the logs, but I can't find a stanza in the documentation for adding XML to inputs.conf. Is it possible at all, or is blacklisting the only option? Thank you.
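For what it's worth: inputs.conf has no stanza that accepts an Event Viewer XML query; filtering there is done with the whitelist/blacklist keys, either by event code or by key=regex pairs (renderXml = true only changes how events are rendered, not how they are filtered). A minimal sketch, with placeholder codes and patterns:

```
[WinEventLog://Security]
blacklist1 = EventCode="4688"
blacklist2 = EventCode="4662" Message="Object Type:\s+(?!groupPolicyContainer)"
```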
Does anybody have experience with adding custom logs from Event Viewer to inputs.conf? Is it enough to put the stanza

[WinEventLogs://name of custom event logs same as in Event Viewer]

or is something else needed? Thank you.
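A sketch for reference: the stanza key is WinEventLog:// (no trailing "s"), and the channel name must match the full log name shown in Event Viewer under Applications and Services Logs. The channel below is just an example:

```
[WinEventLog://Microsoft-Windows-PowerShell/Operational]
disabled = 0
```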
Hi Ryan, is it true that an application agent node cannot belong to more than one tier? We have the same server (node) reporting to different tiers; around 4 nodes are mapped to different tiers, and each node is contributing to the calls in different BTs. Hence, even though we have only 37 agents, we see around 100+ app agent servers in the AppDynamics agents window under Settings, so we are a bit confused about how this is happening. Can you explain a bit more?
I need help writing the time format and time prefix for the time logs below. Please note these are separate logs, hence I need a different TIME_FORMAT and TIME_PREFIX for each of the three. Help will be appreciated, thanks in advance!

####<30/06/2021 11:13:08,975 PM AEST>
####<Jul 3, 2021 4:25:41,233 PM AEST>
[2021-07-06T23:59:58.849+10:00]

I am trying to get this added to the props.conf file in the format below and need assistance with TIME_FORMAT and TIME_PREFIX:

DATETIME_CONFIG =
NO_BINARY_CHECK = true
TZ = Australia/Sydney
TIME_FORMAT =
TIME_PREFIX =
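Untested strptime sketches for the three formats (verify each with a data preview; strptime handling of zone abbreviations like AEST can vary):

```
# ####<30/06/2021 11:13:08,975 PM AEST>
TIME_PREFIX = ^####<
TIME_FORMAT = %d/%m/%Y %I:%M:%S,%3N %p %Z

# ####<Jul 3, 2021 4:25:41,233 PM AEST>
TIME_PREFIX = ^####<
TIME_FORMAT = %b %d, %Y %I:%M:%S,%3N %p %Z

# [2021-07-06T23:59:58.849+10:00]
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z
```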
I was just wondering if there is any plan to have AppDynamics collect the metrics from DAPR (Dapr.IO).  Thanks
Hi, I have JSON data that I am working on, and I used fieldsummary to get data similar to the image below. Sample example: suppose I have my result like this. I want to get the count of the value "Denver" in the field values from the image above. I tried spath but it's not working. The output should look like:

value    Count
Denver   1

Any help is appreciated, thanks.
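Without seeing the exact event structure, a hedged sketch, assuming values is a JSON array extractable with spath (the path values{} and the field names are assumptions):

```
... | spath output=value path=values{}
| mvexpand value
| where value="Denver"
| stats count AS Count BY value
```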
How do I find the server class a host belongs to?
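One way, assuming you can run a search on (or against) the deployment server: query its clients endpoint over REST. The exact field names vary by version, so treat them as assumptions; serverclass.conf on the deployment server remains the authoritative source.

```
| rest /services/deployment/server/clients
| table hostname applications.*.serverclasses
```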
I want all syslog data to come in under a general sourcetype. If it matches a transform, the sourcetype should be changed. Splunk is on 8.0.2.1. Config files are modified by an external script. I confirmed via the GUI, splunk cmd btool props list SyslogServer --debug, and splunk cmd btool transforms list set_sourcetype_UPS:TrippLite --debug that Splunk is seeing my config. Even after a Splunk restart, the sourcetype is still SyslogServer.

transforms.conf

[set_sourcetype_UPS:TrippLite]
REGEX = 192\.168\.0\.100|192\.168\.1\.100|192\.168\.2\.100
FORMAT = sourcetype::UPS:TrippLite
SOURCE_KEY = src_ip
DEST_KEY = MetaData:Sourcetype

props.conf

[SyslogServer]
CHARSET = UTF-8
DATETIME_CONFIG =
FIELD_DELIMITER = |
HEADER_FIELD_LINE_NUMBER = timeWritten,src_ip,facility,severity,timeGenerated,msg_tag,msg_origin,msg
INDEXED_EXTRACTIONS = csv
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK =
category = Custom
pulldown_type = 1
disabled = false
REPORT-SyslogServer1 = REPORT-SyslogServer1
TRANSFORMS-changesourcetype = set_sourcetype_UPS:AmericanPowerConversionCorp.,set_sourcetype_UPS:TrippLite

inputs.conf (on the syslog server)

[monitor://C:\ProgramData\SyslogServer]
disabled = false
# whitelist = *.csv
recursive = true
index = syslog
sourcetype = SyslogServer
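One possible cause, offered tentatively: with INDEXED_EXTRACTIONS set, structured-data parsing happens on the forwarder, and the indexer's parsing pipeline (including index-time TRANSFORMS) is skipped for those events. Separately, to key a transform off a field created by INDEXED_EXTRACTIONS rather than off the raw event, SOURCE_KEY needs the field: prefix. A sketch of the latter:

```
[set_sourcetype_UPS:TrippLite]
SOURCE_KEY = field:src_ip
REGEX = ^192\.168\.[012]\.100$
FORMAT = sourcetype::UPS:TrippLite
DEST_KEY = MetaData:Sourcetype
```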
I'm new to this and would appreciate any help from someone who uses Node.js with Splunk. I can successfully query past search jobs and details for a specific search job given the SID, but I can't create a search job or get a session key through the login link. Below is the code I have working so far, followed by the code that is not working:

const url = 'https://mydomain:8089/services/search/jobs'
const auth = {
      username: 'myusername',
      password: 'mypassword',
}
axios.get(url, {auth})
.then((response) => {
      // do something with the response ...
})
.catch((error) => {
      // handle error ...
})

However, when I try to create a search job in a similar way, it does not work, and when I try to get a session key, it does not work either. I will show both code snippets. The following gives me a 401 Unauthorized:

const search = 'search mysearch'
const params = {search}
axios.post(url, {auth, params})
.then((response) =>  {
      // do something with response ...
})
.catch((error) => {
      // handle error ...
})

When I try to first get a session key, however, it gives me a 400 Bad Request:

const loginurl = 'https://mydomain:8089/services/auth/login'
axios.post(loginurl, {auth})
.then((response) => {
      // do something with response ...
})
.catch((error) => {
      // handle error ...
})

I've been banging my head against the wall with this. Any help is greatly appreciated!
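Two things look off, offered as an untested sketch: in axios.post the second argument is the request body and the third is the config, so {auth, params} is being sent as the body; and /services/auth/login expects form-encoded username/password in the body rather than Basic auth:

```javascript
const axios = require('axios')
const qs = require('querystring')

// create a search job: form-encoded body, auth in the config (third) argument
axios.post(
  'https://mydomain:8089/services/search/jobs',
  qs.stringify({ search: 'search mysearch' }),
  { auth: { username: 'myusername', password: 'mypassword' } }
)

// get a session key: credentials go in the form-encoded body
axios.post(
  'https://mydomain:8089/services/auth/login',
  qs.stringify({ username: 'myusername', password: 'mypassword', output_mode: 'json' })
)
```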
The add-on fails to line-break JSON docs into separate events/logs when pulling from an event hub. Certain Azure services seem to write multiple JSON docs to a single event hub message. Is there an option to correct this parsing?

{"body":{"records": {"DataCenterName": "East US 2", "DeploymentUnit": "xyz", "EventId": 160, "EventName": "AzureBackupCentralReport", "properties": {"VaultUniqueId": ".........
{"body":{"records": {"DataCenterName": "East US 2", "DeploymentUnit": "xyz", "EventId": 160, "EventName": "AzureBackupCentralReport", "properties": {"VaultUniqueId": ".........
{"body":{"records": {"DataCenterName": "East US 2", "DeploymentUnit": "xyz", "EventId": 160, "EventName": "AzureBackupCentralReport", "properties": {"VaultUniqueId": ".........
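If you can adjust props for that sourcetype, a hedged LINE_BREAKER sketch that splits before each {"body" (assuming the docs really are newline-separated as in the sample; the sourcetype name is a placeholder, and with HEC-based ingestion the props may need to live wherever parsing occurs):

```
[your_eventhub_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\{"body")
```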
I have a Splunk Enterprise instance with a 1GB license set up to aggregate logs in a small Windows AD environment (Server 2016 DC, CentOS file server, and < 10 Win10 workstations). I currently have the DC, file server, and 3 workstations deployed. I keep getting license usage warnings. Upon investigation, the CentOS server where the Splunk server is installed is by far the largest license user (on average 200% usage). Furthermore, my linux_audit sourcetype is the main source of the usage. That sourcetype only monitors /var/log/audit/audit.log. On disk, /var/log/audit/audit.log is only 74MB, so I have no idea why I am using 2GB+ of license every single day! Can anyone help?
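To see where the volume is actually coming from, the standard license-usage breakdown helps (keep in mind the license meters ingested volume over the day, not file size on disk, so a 74MB audit.log that rotates frequently or is re-read can easily exceed it):

```
index=_internal source=*license_usage.log type=Usage
| stats sum(b) AS bytes BY st, s, h
| eval GB=round(bytes/1073741824, 3)
| sort - GB
```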
Hi guys,

I would like to ask how to add a link on the next steps form. In the correlation search documentation I read: "Add a link to an action with the syntax: [[action|nameOfAction]]." but it is not clear.

Regards,
Ale
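As an illustration only (assuming "ping" is the name of an adaptive response action enabled in your environment), a next steps entry might look like:

```
Run a connectivity check on the affected host: [[action|ping]]
```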
Hello, I noticed that beginning with Splunk 8.1 the app name/label is no longer shown in the navigation bar after switching to an app. E.g. there is just "App", whereas before it was "App: Search & Reporting". I have seen this on several installations. Only the db-connect app shows the name as before. Does anyone have an idea? I did not find anything in the known issues.

Regards, Andreas
So we just updated to 8.2.1 and we are now getting an Ingestion Latency error. How do we correct it? Here is what the link says, and then we have an option to view the last 50 messages...

Ingestion Latency
Root Cause(s):
Events from tracker.log have not been seen for the last 6529 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.
Events from tracker.log are delayed for 9658 seconds, which is more than the red threshold (180 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.

Here are some examples of what is shown in the messages:

07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\spool\splunk.
07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\run\splunk\search_telemetry.
07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\watchdog.
07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\splunk.
07-01-2021 09:28:52.276 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\var\log\introspection.
07-01-2021 09:28:52.275 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\Splunk\etc\splunk.version.
07-01-2021 09:28:52.269 -0500 INFO TailingProcessor [66180 MainTailingThread] - Adding watch on path: C:\Program Files\CrushFTP9\CrushFTP.log.
07-01-2021 09:28:52.268 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\watchdog\watchdog.log*.
07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk\splunk_instrumentation_cloud.log*.
07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk\license_usage_summary.log.
07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk.
07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\introspection.
07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: monitor://$SPLUNK_HOME\etc\splunk.version.
07-01-2021 09:28:52.267 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\tracker.log*.
07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\...stash_new.
07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\...stash_hec.
07-01-2021 09:28:52.266 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk.
07-01-2021 09:28:52.265 -0500 INFO TailingProcessor [66180 MainTailingThread] - Parsing configuration stanza: batch://$SPLUNK_HOME\var\run\splunk\search_telemetry\*search_telemetry.json.
07-01-2021 09:28:52.265 -0500 INFO TailingProcessor [66180 MainTailingThread] - TailWatcher initializing...
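A common first check is whether the indexing queues are backing up; the standard metrics.log queue search (field names as documented for metrics.log) gives a quick picture:

```
index=_internal source=*metrics.log* group=queue (name=parsingqueue OR name=aggqueue OR name=typingqueue OR name=indexqueue)
| timechart span=5m avg(current_size_kb) BY name
```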
This add-on has two inputs that no longer seem to be valid arguments for splencore.sh: "clean" and "status". I've disabled these inputs for now. I believe these used to work in 3.8.x but have recently stopped working in the 4.6.x versions. When running these commands manually, they're not recognized:

[splunk@splunk bin]$ ./splencore.sh clean
Usage: {start | stop | restart | foreground | test | setup}
start: starts eNcore as a background task
stop: stop the eNcore background task
restart: stop the eNcore background task
foreground: runs eNcore in the foreground
test: runs a quick test to check connectivity
setup: change the output (splunk | cef | json)
clean

[splunk@splunk bin]$ ./splencore.sh status
Usage: {start | stop | restart | foreground | test | setup}
start: starts eNcore as a background task
stop: stop the eNcore background task
restart: stop the eNcore background task
foreground: runs eNcore in the foreground
test: runs a quick test to check connectivity
setup: change the output (splunk | cef | json)
status