All Topics

Hi, I want to monitor a whole bunch of Universal Forwarders that I have set up and configured. All data from these is forwarded to a heavy forwarder, which forwards everything to Splunk Cloud. My problem is that I only have access to one index in the cloud, not the _internal index that receives UF metrics. Is it possible to change the index from _internal to the one I have access to in the UF config?
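A possible approach, sketched but untested: on a UF, the internal logs are picked up by a default monitor stanza on $SPLUNK_HOME/var/log/splunk with index = _internal, and that stanza can be overridden in a local inputs.conf. The index name below is a placeholder for whichever index you can access:

# $SPLUNK_HOME/etc/system/local/inputs.conf on the UF
# Overrides the default stanza that routes the UF's own logs to _internal.
# "my_accessible_index" is a placeholder - use the cloud index you can search.
[monitor://$SPLUNK_HOME/var/log/splunk]
index = my_accessible_index

Two caveats: internal monitoring dashboards that expect index=_internal will no longer see these hosts, and internal data re-pointed into a regular index may count against your license, so it is worth checking before rolling this out.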
Hi fellow Alert Manager users, is it possible to create Alert Manager incidents from SPL instead of from the custom alert action? This would allow for easier testing: being able to manually create incidents without needing to schedule a search. I tried the following (without diving into the code so far), but it did not work out:

| makeresults count=1
| eval index="alerts", auto_assign_owner="unassigned", impact="low", urgency="low", title="testTitle", owner="unassigned", append_incident=1, auto_previous_resolve=0, auto_subsequent_resolve=0, auto_suppress_resolve=0, auto_ttl_resove=0, display_fields="test", category="testCategory", subcategory="testSubcategory", tags="testTag", notification_scheme="testScheme"
| sendalert alert_manager param.index="alerts" param.auto_assign_owner="unassigned" param.impact="low" param.urgency="low" param.title="testTitle" param.owner="unassigned" param.append_incident=1 param.auto_previous_resolve=0 param.auto_subsequent_resolve=0 param.auto_suppress_resolve=0 param.auto_ttl_resove=0 param.display_fields="test" param.category="testCategory" param.subcategory="testSubcategory" param.tags="testTag" param.notification_scheme="testScheme"

It fails with the error:

File "/opt/splunk/etc/apps/alert_manager/bin/alert_manager.py", line 570, in <module>
    log.debug("Parsed savedsearch settings: expiry={} digest_mode={}".format(savedSearch['content']['alert.expires'], savedSearch['content']['alert.digest_mode']))
KeyError: 'content'

Thanks!
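Reading the traceback, alert_manager.py appears to look up the definition of the saved search that triggered it (savedSearch['content']) and fails because an ad-hoc sendalert has no saved search behind it. A sketch of a workaround, assuming the param.* names map onto action.alert_manager.param.* settings (not verified against the app): define a throwaway saved search with the alert action attached and trigger that, rather than running sendalert ad hoc.

# savedsearches.conf - hypothetical throwaway search for testing.
# The action.alert_manager.param.* keys are assumptions mirroring the
# param.* settings above; confirm them against the app's own savedsearches.
[test_alert_manager_incident]
search = | makeresults count=1
alert.expires = 24h
alert.digest_mode = 0
action.alert_manager = 1
action.alert_manager.param.title = testTitle
action.alert_manager.param.impact = low
action.alert_manager.param.urgency = low

Defining alert.expires and alert.digest_mode explicitly matters here, since those are exactly the two keys the script tries to read.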
Hello, I am posting here to ask if anyone has an idea about the queries I should build, save, and combine into a single dashboard to monitor my forwarders. I need queries to:
- show the maximum CPU usage (in percent) per monitored machine, and the maximum CPU usage (in percent) across all these machines
- another exactly like the previous one, but for the average CPU usage (in percent)
- a third with the same concept, but for RAM instead of CPU (again in percent)
- the same thing for disk usage (in percent)
- two more for inbound and outbound network traffic (in percent, with a 1 Gbps unit)
The data is collected from the monitored machines via the Splunk Add-on for Unix and Linux and stored in an index called Linux. Thank you!
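A possible starting point for the CPU panels, assuming the add-on's cpu sourcetype and its pctIdle field (field names vary by add-on version, so treat these as assumptions to verify against your data):

index=linux sourcetype=cpu CPU=all
| eval cpu_pct = 100 - pctIdle
| stats max(cpu_pct) AS max_cpu_pct avg(cpu_pct) AS avg_cpu_pct BY host
| appendpipe [ stats max(max_cpu_pct) AS max_cpu_pct avg(avg_cpu_pct) AS avg_cpu_pct | eval host="ALL" ]

Similar shapes should work for RAM and disk, assuming the add-on's vmstat sourcetype exposes memUsedPct and its df sourcetype exposes UsePct in your version; the network panels would divide observed throughput by your 1 Gbps link speed.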
Hi fellow Alert Manager users, what is a good way to clear out the Alert Manager incidents, to restart fresh? I am creating new tickets and still testing their content. After having finalized the ticket content, I would like to start fresh again. Seeing that Alert Manager keeps incidents, their details, and their change history separately, in an index, in KV store collections, and in other places, the question is what should be cleared to get a clean new start. Thanks!
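One way to clear the KV store side is the collections REST API; list the app's collections first, then delete the records. A sketch, assuming default ports and with the collection name "incidents" as an assumption to confirm from the listing (credentials are placeholders):

# List the app's KV store collections to see what exists
curl -k -u admin:changeme \
    https://localhost:8089/servicesNS/nobody/alert_manager/storage/collections/config

# Delete all records from one collection (repeat per collection)
curl -k -u admin:changeme -X DELETE \
    https://localhost:8089/servicesNS/nobody/alert_manager/storage/collections/data/incidents

Events already written to the app's index cannot be removed this way; those either age out with retention or can be hidden with | delete by a user holding the can_delete role.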
I know this topic has been discussed many times on this forum, but I have not found a case like mine so far. At index time, Splunk swaps the day for the month and the month for the day from the 1st of each month until %d=%m; from 12/12 (for example) onward, the data is stored correctly in December. The data I have in the log looks like this:

01/12/2021 12:10:04, ......

And the configuration I have in props.conf is as follows:

[source:://not/able/to/show/real/path/license_*.txt]
TIME_FORMAT=%d/%m/%Y %H:%M:%S
TIMESTAMP_FIELDS=Date

I have tried to analyze which props are taken into account with the command

splunk cmd btool props list --debug

and the properties do seem to be taken into account. In my case a TIME_PREFIX is not applicable either, because there are no spaces or symbols at the beginning. I have tried everything. Any suggestions? I have run out of ideas.
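A note that may explain this: TIMESTAMP_FIELDS only applies to structured data handled by INDEXED_EXTRACTIONS; for plain text lines, the timestamp processor uses TIME_PREFIX, TIME_FORMAT, and MAX_TIMESTAMP_LOOKAHEAD, and if none of those take effect Splunk guesses the date, defaulting to month/day order. TIME_PREFIX = ^ is valid even when the date sits at the very start of the line. A sketch, with the path as a placeholder, to be placed on the first full instance (heavy forwarder or indexer) that parses the data:

# props.conf - TIMESTAMP_FIELDS is ignored without INDEXED_EXTRACTIONS;
# anchor the timestamp at line start and bound the lookahead instead
[source::/path/to/license_*.txt]
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19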
I have sourcetype A that has info about service accounts, such as name, AU, email, full_name, and manager_name. But some of the events in sourcetype A do not contain the email, manager_name, and full_name fields. In those cases I have to look into another index and sourcetype, say B, to fetch that data. AU is the common field name in both. Can we combine the data without having to use 'join', for performance reasons?
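A common join-free pattern is to search both sourcetypes at once and roll up by the shared key. A sketch; the index names are placeholders:

(index=index_a sourcetype=A) OR (index=index_b sourcetype=B)
| fields AU name email full_name manager_name
| stats values(name) AS name values(email) AS email values(full_name) AS full_name values(manager_name) AS manager_name BY AU

If you need the original A events enriched rather than a summary, eventstats with the same BY clause, or a lookup built from a scheduled search over B, are the usual alternatives.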
I have data coming in where I have a field called Result which holds data as below:
1) "FAIL"
2) " FAIL "
3) "PASS"
4) " PASS "
Now I have created a dashboard where the Result field is used in a dropdown box. I have cleared the extra space from the field using | rex mode=sed field=Result "s/ //g" |. I also have a table showing the PASS and FAIL counts, where I have likewise cleared the space with the rex command and created the Result fields using | stats count(eval(searchmatch("PASS"))) AS PASS count(eval(searchmatch("FAIL"))) AS FAIL |. Now when I filter "FAIL" or "PASS" in the dropdown and submit, the table on the dashboard does not show the count for the values with spaces (i.e. " PASS " and " FAIL ") and shows the count only for the values without spaces. How can I solve this?
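One likely cause: searchmatch() matches against the raw event text, not the cleaned field, and a dropdown token applied before the rex runs will miss the padded values. A sketch that trims the field and filters and counts on the trimmed value instead; $result_token$ is a placeholder for your dropdown token:

index=your_index
| eval Result=trim(Result)
| search Result="$result_token$"
| stats count(eval(Result=="PASS")) AS PASS count(eval(Result=="FAIL")) AS FAIL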
I've been trying to add a cluster map that tracks the location of a vehicle to my dashboard. Although I can see the location of the vehicle in a cluster map by using search, I am not able to create a cluster map from the dashboard, as I cannot see a feature for creating one. I can only see "world map", "US map", and "choropleth SVG" features. I copied the command that I used in search to track the location and pasted it into the SPL code sections of all these map features presented by Dashboard Studio, but it did not work. Could you please guide me on creating a cluster map in Dashboard Studio? If not, could you please suggest another detailed way to track the location? I appreciate your time.
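For context: cluster maps in classic (Simple XML) dashboards are driven by geostats, and whether Dashboard Studio's map offers an equivalent marker layer depends on the Splunk version. The search shape is the same either way; index and lat/lon field names below are placeholders:

index=vehicle_tracking
| geostats latfield=lat longfield=lon count

If Studio's map panels do not accept this in your version, one fallback is to keep that panel in a classic dashboard, where the cluster map visualization is available.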
Hi, I'm collecting syslog events from the network on a dedicated universal forwarder using a TCP input on the forwarder. In my Splunk installation I get all the syslog entries, but there's a number in angle brackets (<149>, for example) added to the beginning of every log entry added to the Splunk index. That number is not always <149>; it changes, but I cannot find the logic behind those changes. That bracketed number prevents correct field extraction. So my question is: how do I get rid of that number in angle brackets? Should it be done on the forwarder? I'm sorry if my question is stupid or well covered in documentation; I'm relatively new to Splunk and still learning. Thank you!
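That number is the syslog PRI header (facility * 8 + severity), which is why it varies from message to message. One way to strip it is a SEDCMD in props.conf, applied where parsing happens: on the indexer or a heavy forwarder, not on the universal forwarder. A sketch; the sourcetype name is a placeholder:

# props.conf on the indexer / heavy forwarder
[your_syslog_sourcetype]
SEDCMD-strip_pri = s/^<\d+>//

Keeping the PRI and extracting facility/severity from it is also an option, since that information is otherwise lost.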
I have indexed a file in Splunk, but when I start searching, the file cannot be found. Do you know why this happened? My last problem has been solved; currently I am having a problem where the indexed file cannot be found. I have set up a monitor input on /home/csaops/csasec/*.txt, but when I searched for it, there were no results. Do you know why this happened? It is not binary data; it is health-check data.
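A quick way to check whether the data was ingested at all, perhaps under an unexpected index or with a timestamp outside your search window, is a metadata search run over All time. A sketch:

| tstats count WHERE index=* source="/home/csaops/csasec/*.txt" BY index sourcetype source

If the data shows up here but not in your normal search, the usual suspects are the time picker (events with past or future timestamps), index permissions on your role, or searching the wrong default index.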
I have an issue removing double quotes from the middle of a string. Example below: "My Name "is Ethan". Here I want to retain the quotes at the front and back, but remove the one in the middle. I used SEDCMD as SEDCMD-removeDoubleQuotes = s/\s"//g. This worked perfectly; I got the expected output: "My Name is Ethan". But the problem is that sometimes the last double quote comes with a space before it, as in "My Name "is Ethan ". Because of that, the above SEDCMD also removes the double quote at the end of the string, and the result is "My Name is Ethan . Can someone help me handle this and remove specifically the middle double quote? Help is appreciated.
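One way to leave the trailing quote alone is to only strip a quote that is immediately followed by more text, by capturing the next word character and putting it back. A sketch, untested:

# props.conf - removes <space>" only when a word character follows,
# so a quote at the end of the string (even after a space) is kept
SEDCMD-removeDoubleQuotes = s/\s"(\w)/ \1/g

With this, "My Name "is Ethan " becomes "My Name is Ethan " and "My Name "is Ethan" becomes "My Name is Ethan", since the final quote is never followed by a word character.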
We recently upgraded our Splunk from 8.1.4 to 8.2.2.2. After the upgrade, Dashboard Studio works fine in Google Chrome, but we see a blank page when trying to open the same dashboards in Firefox.
I have created a saved search and it runs every day. I then created a report that uses this saved search. All I am doing in the report is calling the saved search like this: | savedsearch mysavedsearchname. The problem is that when I run this report, it looks like it is running the query behind the saved search. I was hoping that instead of running the query, it would show the last-run results from the saved search. If this is by design, then how can I get the last-run results without running the search again? I know that I could push the saved search results to a CSV file and then read the CSV in the report, but I don't want to do this.
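The loadjob command does exactly this: it retrieves the artifacts of the most recent scheduled run instead of re-running the search. A sketch; the user and app qualifiers are placeholders for wherever the saved search lives:

| loadjob savedsearch="someuser:search:mysavedsearchname"

Note this only works when the search is scheduled, so that there is a cached result artifact to load.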
Hey folks, I am trying to pull a result based on chart count by; I am also not sure if there is another command which can produce this result. The end result I am looking for, starting from something like

http.status IN (200,400,403) | chart count by path http.status

is a table of percentages:

path        200     400     403
/abc        10%     30%     60%
/xyz        20%     40%     40%
/home       35%     35%     30%

I have checked the community answers, but none of them is close to what I am looking for. If someone could guide me through this, that would be really helpful.
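One way is to chart the raw counts, then convert each row to percentages of its row total, so each path's columns sum to 100. A sketch, untested, assuming http.status works unquoted in the BY clause as in your search:

http.status IN (200,400,403)
| chart count OVER path BY http.status
| addtotals fieldname=row_total
| foreach 200 400 403 [ eval "<<FIELD>>" = round('<<FIELD>>' * 100 / row_total, 1) ]
| fields - row_total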
Hi, I am using a distributed Splunk Enterprise deployment (at the Phantom end) to ingest Phantom logs into Splunk. The CORE SIT search head IP is used here and it is working fine. But when we use the ES SIT search head IP, I get the error "Test connection failed for Phantom search on Host - xx.xx.xx.xx". Telnet connectivity is working fine for both the CORE and ES search heads. Why are we unable to connect to the ES search head?
After I set up the configuration and settings in the GSuite app in Splunk, it is able to collect the different audit logs, like the admin/token logs in GSuite, but not the login report. Any advice on this? Thanks a lot.
| set union
    [ search index=my_index | eval nums="1,2,3,4,5" | fields - _* | makemv delim="," nums | stats values(nums) as num ]
    [ search index=my_index | eval nums="2,3,4,5,6" | fields - _* | makemv delim="," nums | stats values(nums) as num ]

I would expect the result to be a single table with the values 1,2,3,4,5,6, but instead I just get the two datasets on top of each other.
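An explanation worth considering: set union deduplicates whole result rows, and here each subsearch returns a single row whose num is a different multivalue list, so the two rows are not equal and both are kept. Expanding to one value per row lets the union deduplicate the individual values. A sketch:

| set union
    [ search index=my_index | eval nums="1,2,3,4,5" | makemv delim="," nums | stats values(nums) AS num | mvexpand num | fields num ]
    [ search index=my_index | eval nums="2,3,4,5,6" | makemv delim="," nums | stats values(nums) AS num | mvexpand num | fields num ]
| sort num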
Hello, we are integrating our on-prem Splunk (version 8.2.3) to retrieve messages from an Azure Event Hub. We have configured Linux server syslog to send to an Event Hub (Linux diagnostic extension 4.0). We installed the Splunk Add-on for Microsoft Cloud Services app (version 4.2.0), configured the Azure app account, and created the inputs to map to the Event Hub namespace/hub name/consumer group. We are seeing data arrive in Splunk, but it is arriving split across multiple events. If I add all three events together, it is valid JSON for a single event:

{"body":{
  "time" : "2021-11-30T23:30:01.0000000Z",
  "resourceId" : "/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Compute/virtualMachines/xxx",
  "properties" : {
    "ident" : "CRON",
    "pid" : "177365",
    "Ignore" : "syslog",
    "Facility" : "authpriv",
    "Severity" : "info",
    "EventTime" : "2021-11-30T23:30:01+0000",
    "SendingHost" : "localhost",
    "Msg" : "pam_unix(cron:session): session closed for user root",
    "hostname" : "xxx",
    "FluentdIngestTimestamp" : "2021-11-30T23:30:01Z"
  },
  "category" : "authpriv",
  "level" : "info",
  "operationName" : "LinuxSyslogEvent"
},"x-opt-sequence-number":100,"x-opt-offset":"77128","x-opt-enqueued-time":1638315007941}

But we need this to be a single event in Splunk in order to process the data effectively! Interestingly enough, we are using the Event Hub integration to also retrieve resource diagnostic logs, and we don't see the same issue there; it only happens when using Event Hubs for Linux diagnostics. Has anyone faced this issue, or does anyone know how to correct the problem? Thanks!
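This pattern usually means the events are being line-broken at the newlines inside the pretty-printed JSON. A possible fix, heavily hedged, is a props.conf override on the instance running the add-on input (or the indexers) that only breaks before a new {"body" object; the sourcetype name below is an assumption, so verify what the input actually assigns to these events:

# props.conf - sourcetype name is an assumption; check your events
[mscs:azure:eventhub]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\{"body"
TRUNCATE = 100000

This would explain why the resource diagnostic logs are unaffected: if those arrive as single-line JSON, the default newline breaking never splits them.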
My current Splunk forwarder log monitoring is indexing events in groups (sometimes more than one event together), but I want each event (which has its own datetime at the start) to be indexed separately. Only the start of each event has the same format; the rest of the string varies. I tried configuring props.conf with the following settings:

LINE_BREAKER = ([\r\n]+)

(though that is the default, it seems not to be working, even though my events are separated by a newline or \r in the source log file), and then I tried:

BREAK_ONLY_BEFORE = ^\d+\s*$

Currently multiple entries are being indexed together as one event. Entries in the source file (example):

2021-Dec-01 Wed 08:50:06.914 INFO [Thread-3] - org.eclipse.jetty.server.session - {} - doStart(DefaultSessionIdManager.java:334) - DefaultSessionIdManager workerName=node0
2021-Dec-01 Wed 08:50:06.915 INFO [Thread-3] - org.eclipse.jetty.server.session - {} - doStart(DefaultSessionIdManager.java:339) - No SessionScavenger set, using defaults
2021-Dec-01 Wed 08:50:06.917 INFO [Thread-3] - org.eclipse.jetty.server.session - {} - startScavenging(HouseKeeper.java:132) - node0 Scavenging every 660000ms
2021-Dec-01 Wed 08:50:06.956 INFO [Thread-3] - org.eclipse.jetty.server.AbstractConnector - {} - doStart(AbstractConnector.java:331) - Started ServerConnector@5e283ab9{HTTP/1.1, (http/1.1)}{127.0.0.1:22113}
2021-Dec-01 Wed 08:50:06.956 INFO [Thread-3] - org.eclipse.jetty.server.Server - {} - doStart(Server.java:415) - Started @6850ms
2021-Dec-01 Wed 08:50:24.331 INFO [pool-6-thread-1] - com.automationanywhere.nodemanager.service.impl.WindowsEventServiceImpl - {} - onMachineLogon(WindowsEventServiceImpl.java:226) - Machine Logon: 1
2021-Dec-01 Wed 08:58:35.372 INFO [pool-6-thread-1] - com.automationanywhere.nodemanager.service.impl.WindowsEventServiceImpl - {} - onMachineLocked(WindowsEventServiceImpl.java:204) - Machine Locked: 1
2021-Dec-01 Wed 09:17:38.934 INFO [pool-6-thread-1] - com.automationanywhere.nodemanager.service.impl.WindowsEventServiceImpl - {} - onMachineUnlocked(WindowsEventServiceImpl.java:214) - Machine Unlocked: 1
2021-Dec-01 Wed 09:17:38.937 INFO [pool-6-thread-1] - com.automationanywhere.nodemanager.service.impl.WindowsEventServiceImpl - {} - onMachineUnlocked(WindowsEventServiceImpl.java:216) - Session id 1 removed from tracking on machine unlock.

I would appreciate any help in configuring props.conf to index each entry as a separate event. TIA.
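Since every entry starts with a fixed-format timestamp, breaking before that pattern is more reliable than the attempted BREAK_ONLY_BEFORE = ^\d+\s*$ (which looks for a line containing only digits). Also note that line breaking happens on the indexer or heavy forwarder, not on a universal forwarder, so the props need to live on the parsing tier. A sketch, with the sourcetype name as a placeholder:

# props.conf on the indexer / heavy forwarder (sourcetype is a placeholder)
[your_log_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\w{3}-\d{2}\s
TIME_PREFIX = ^
TIME_FORMAT = %Y-%b-%d %a %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30

The LINE_BREAKER only consumes the captured newlines, so each event still begins with its own timestamp, which TIME_FORMAT then parses (e.g. 2021-Dec-01 Wed 08:50:06.914).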