All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello from the Splunk Data Manager Team,

We are excited to announce the preview of Data Manager for Splunk Cloud. Before you search through previous conversations looking for assistance, we want to provide you with some basic information and quick resources.

Want to access product docs? The User Manual offers detailed guidance on each stage of using Data Manager.
Want to request more features? Add your ideas and vote on other ideas at the Data Manager Ideas Portal.
Want to search for a solution? Get answers from other Splunk customers and experts on the Data Manager Forum.

Please reply to this thread with any questions or to get extra help!
Hi, I am using geostats for the first time:

| inputlookup gps3.csv | geostats latfield=Latitude longfield=Longitude count by Couleur

I need the pie charts to display the count or the percentage of Couleur, and when I open my dashboard I need the map to focus directly on my country instead of the whole world. How can I do this, please?

I also need to count the values of the "id" field and display the count on my map, but nothing happens. What is wrong?

index=toto sourcetype=tutu site="x" OR site="y" OR site="z" | fields id site | stats dc(id) as id | lookup gps3.csv Ville as site output Latitude Longitude | geostats latfield=Latitude longfield=Longitude count(id) by site
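One possible reason the second search shows nothing: "stats dc(id) as id" collapses the results into a single row and drops the site field, so the subsequent lookup has nothing to match on. A sketch that keeps the per-site grouping (field and lookup names are taken from the post above; adjust to your data):

```
index=toto sourcetype=tutu site="x" OR site="y" OR site="z"
| stats dc(id) as id_count by site
| lookup gps3.csv Ville as site OUTPUT Latitude Longitude
| geostats latfield=Latitude longfield=Longitude sum(id_count) by site
```

For the initial zoom, the map panel's centre and zoom level can usually be set in the dashboard source (for example the mapping.map.center and mapping.map.zoom options in Simple XML); the exact option names depend on the visualization in use.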
Hi Splunk folks,

My team is seeing a pesky issue with Palo Alto logs where a small subset are not being sourcetyped into pan:traffic, pan:threat, etc. As pan:log is the default, a few logs keep this sourcetype. We have attempted to regex and nullQueue out the remaining pan:log events, but with no success. When we implemented this TRANSFORMS entry in props.conf, we placed it at the end because we understood the order follows a left-to-right priority.

Here is an example of a log that is being sourcetyped as "pan:log" and that we would want to drop. It seems to be fragmented syslog, but nonetheless, junk to us:

000-1823048e98,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,2021-11-10T10:04:03.905+00:00,,,infrastructure,networking,network-protocol,3,"used-by-malware,has-known-vulnerability,pervasive-use",,dns,no,no,0

Palo Alto props.conf:

[pan_log]
pulldown_type = false
SHOULD_LINEMERGE = false
TIME_PREFIX = ^(?:[^,]*,){5}
MAX_TIMESTAMP_LOOKAHEAD = 100
TRANSFORMS-sourcetype = pan_threat, pan_traffic, pan_system, pan_config, pan_hipmatch, pan_correlation, pan_userid, pan_globalprotect, pan_decryption

We added a new nullQueue entry in transforms.conf and then inserted pan_discard after the last entry in the TRANSFORMS list above:

[pan_discard]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Any suggestions?
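One thing worth double-checking here: the earlier transforms in the list write to DEST_KEY = MetaData:Sourcetype, while pan_discard writes to DEST_KEY = queue, so the two do not override each other. With REGEX = . the discard matches every event that hits the [pan_log] stanza, including the ones that were just re-sourcetyped, and routes them all to the nullQueue. A narrower discard that targets only the fragmented lines might look like the sketch below; the regex is purely illustrative and must be tuned to match your actual junk events:

```
# transforms.conf -- drop only the fragmented syslog lines
[pan_discard]
REGEX = ^\d{3}-[0-9a-f]+,0,0,
DEST_KEY = queue
FORMAT = nullQueue
```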
Hello Team, I am new to Splunk and we have integrated Splunk with OCP logs. I am able to see the logs, but I am not sure how to write queries, for example to fetch a pod's active status, CPU utilization, memory utilization, or API response time. Any help would be highly appreciated. Thanks in advance.
I'm a bit confused. I tried to colour one of the columns in my table by the range of values in that column, and it almost works. The problem is that the colouring is applied separately to each page of results, which is a bit ridiculous.

These are two pages from the same search. Kinda ridiculous, isn't it? Am I doing something wrong? It seems there's no simpler way than choosing "range" with a preset. But how do I make it work across the whole result set (so my first screen should be mostly orange-ish, and the second should be blue)?
In deployment-apps I created apps without a local folder. After reloading the server class, a local folder is automatically generated on the client. Any idea why Splunk generates a local folder in the app?
Hi, how can I find the total number of batch jobs running for a time frame (or at any given instant) and show them in a dashboard for a particular CSV file? Please help me with this.
What does the error below mean, and how do I remediate it? It appears after running "splunk restart splunkweb":

HTTP/1.1 404 Not Found
I've set up an alert that fires as soon as 1 record is found. But I actually want to wait for a few more events to happen in the next 5 minutes. I want my alert to wait 5 minutes, collect all the events, and then send the report. Is there a way to make my alert wait until it has fetched all the events that occur in the next five minutes?
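One common way to achieve this (a sketch, not the only option): keep the 5-minute schedule, but point the search at a time window that ends 5 minutes in the past, so each run only evaluates a window in which all the related events have already had time to arrive. The index and search terms below are placeholders:

```
index=myindex "your trigger condition" earliest=-10m@m latest=-5m@m
```

With a cron schedule of */5 * * * *, each run then reports on a complete, already-closed 5-minute window instead of alerting on the first matching event.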
Hello,

Following the upgrade of our corporate version of Splunk, we noticed a recurring problem with IOwait (for your information, we went from version 8.1.0.4 to 8.2.2.1). Below are the characteristics of the machine that runs Splunk standalone:
- 4 CPUs
- 8 GB of RAM
- 50 GB of disk space

We did not have this alert on the previous version 8.1.0.4. Can you tell us the reason for this alert since the version upgrade?

Best regards.
Hello,

With the goal of retrieving data stored in Splunk from Power BI, we want to install our company's certificate on port 8089 for communication with the Splunk API. Please give us the procedure to do this, knowing that we are on version 8.2.2.1 and that our machine is a standalone development machine which communicates with a remote LM on port 8089 of the API. (FYI, this company certificate is already used for the Splunk web interface on port 8000.)

Best regards.
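As a starting point: on a standalone instance, the management-port (8089) certificate is configured in server.conf under the [sslConfig] stanza. A sketch with placeholder paths (the PEM file must contain the server certificate followed by its private key, and Splunk must be restarted afterwards):

```
# server.conf -- example paths and password are placeholders
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/company_cert.pem
sslPassword = <private-key password>
```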
Hi guys, does anyone know whether you can populate a summary index from a data model? A summary-indexing query requires the si* prefix on transforming commands (e.g. sistats), but a data model search requires a tstats command, so the two cannot be used in conjunction.
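One workaround worth considering (a sketch; the data model name, grouping field, and index are hypothetical): instead of si* commands, schedule a tstats search over the data model and write its results to the summary index with collect:

```
| tstats count from datamodel=My_DataModel where earliest=-1h@h latest=@h groupby My_DataModel.src
| collect index=my_summary
```

Note that results written via collect are stored as plain events, so downstream searches read them with ordinary stats rather than the si*-aware reporting commands.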
I'm trying to fetch logs into Splunk from AWS CloudTrail using the Splunk Add-on for AWS. When I check the S3 bucket size, it shows only 2 GB of data, but if I enable the CloudTrail input in the add-on, the Splunk index consumes 3 to 4 GB. My configuration in the add-on input is correct, and I'm only getting logs in Splunk from the date range that I specified. Is this because the compression of data in AWS and in Splunk is different? Please help me resolve this.
Hi team,

Please help with the regex to fetch the values of serverName and HostNumber from the payload below:

"{\n \"process\": \"Monitoring\",\n \"serverName\": \"\",\n \"HostNumber\": \"\",\n \"startDate\": \"\",\n \"firstName\": \"person _anusha1234\",\n \"lastName\": \"dev1234\",\n \"emailAddress\": \3423423213213@ada.com\,\n \"personnelNumber\": \"812060\",\n \"status\": \"FAILED\",\n \"code\": \"APP:Login_ERR001\",\n \"message\": \"\\\"218177-123132 does not exist in LDAP\\\"\",\n \"errorRecord\": \"BirthName:|BirthPlace:|ActiveFlag:|Role:PINV|Environment:|PrimaryInRole:|HostNumber:|Reference:123132|AlternateNumber:3223|serverName:abc|SubjectStatus:|CreatedBy:|Department:|DiscontinuedDate:|DiscontinuedFlag:N|EmployeeID:|EmployeeStatus:|EmploymentReason:|EmploymentType:|EndDate:|StartDate:|FirstName:person _anusha1234|LastName:dev1234|NPINumber:812060|AddressLine1:|AddressLine2:|AddressLine3:|City:|CountryISO3Code:|Province:|Zip:|EmailAddress:3423423213213@ada.com|Fax:|PhoneNumber:|PersonnelNumber:812060\"\n}\",\n \"tracePoint\" : \"FLOW\",\n \"priority\" : \"INFO\",\n \"category\" : \"com.wipro.api\",\n \"elapsed\" : 1893,\n \"timestamp\" : \"2021-11-09T11:19:53.943Z\",\n \"applicationName\" : \"Monitoring\",\n \"applicationVersion\" : \"v1\",\n \"environment\" : \"Stage\",\n \"threadName\" : \"[MuleRuntime].uber.70: [wipro-prc-primary-monitoring].log-data-for-splunk-dashboard/processors/1.ps.BLOCKING @a8ff6c0\"\n}

Thanks in advance,
RK
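Since both fields also appear in the pipe-delimited errorRecord section of the payload (serverName:abc|, HostNumber:|), one option is to extract from that copy, where the quoting is much simpler. A sketch, assuming the values themselves never contain a pipe character (HostNumber is empty in the sample, so it captures an empty string):

```
... | rex "serverName:(?<serverName>[^|]*)"
    | rex "HostNumber:(?<HostNumber>[^|]*)"
```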
Hi, I use a lookup to map the field web_error_code in my sourcetype, which is an error code, to the name of the error code. It works perfectly, but I need to complete the timechart legend with the original web_error_code: for example, I need "400 - Bad gateway", whereas currently I just have "Bad gateway". How can I do this, please?

index=toto sourcetype=tutu web_domain="*" | lookup HttpCode.csv status as web_error_code output status_description | timechart span=1d sum(web_error_count) as web_error_count by status_description
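If the goal is a legend like "400 - Bad gateway", one option is to concatenate the code and the description into a single field before the timechart. A sketch based on the search above:

```
index=toto sourcetype=tutu web_domain="*"
| lookup HttpCode.csv status as web_error_code OUTPUT status_description
| eval status_label = web_error_code." - ".status_description
| timechart span=1d sum(web_error_count) as web_error_count by status_label
```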
In props.conf, I set the TRANSFORMS-null attribute:

[ActiveDirectory]
TRANSFORMS-null = setnull

I created a corresponding stanza in transforms.conf, setting DEST_KEY to "queue" and FORMAT to "nullQueue":

[setnull]
REGEX = \[ms_Mcs_AdmPwdExpirationTime\]
DEST_KEY = queue
FORMAT = nullQueue

Then I restarted Splunk Enterprise. Values of the field ms_Mcs_AdmPwdExpirationTime are still in the index; it is not working. What did I configure wrong?
I have a list of servers. I need a query to check whether Splunk is receiving data from each server or not.
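A possible starting point (a sketch; the 24-hour lookback and one-hour freshness threshold are arbitrary choices to adjust):

```
| tstats latest(_time) as last_seen where index=* earliest=-24h by host
| eval status = if(now() - last_seen < 3600, "reporting", "stale")
```

To also surface servers sending nothing at all, the expected host list can be kept in a lookup and appended (e.g. with inputlookup append=t) so that hosts with zero events still show up in the results.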
Log sources come in from a universal forwarder to a heavy forwarder. I am looking to selectively forward some of them to syslog without indexing them on the heavy forwarder or the indexer cluster; these selected logs need to go only to the central syslog logging system. Syslog output is already working on the heavy forwarder, and indexing on the heavy forwarder is disabled, but the events are still being indexed on the indexer host/cluster. Is there a configuration/deployment where the heavy forwarder selectively forwards only to syslog, without any indexing?
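One pattern that may fit (a sketch; all stanza and group names below are placeholders): route the selected sourcetype to a syslog output group via the _SYSLOG_ROUTING key on the heavy forwarder:

```
# outputs.conf on the heavy forwarder
[syslog:central_syslog]
server = syslog.example.com:514

# props.conf -- apply routing to the selected sourcetype
[my_selected_sourcetype]
TRANSFORMS-routing = route_syslog_only

# transforms.conf -- send every event of that sourcetype to the syslog group
[route_syslog_only]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = central_syslog
```

Note that on its own this adds the syslog destination; if the events are still following the default tcpout group to the indexers, their _TCP_ROUTING typically needs to be redirected as well (for example to an empty, non-default group) so they never reach the cluster.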
Hi,

I have the following alert set up. Query (roughly):

index="myindex" "the log message that i am interested in" | head 1

Alert type: Scheduled, run on cron schedule */5 * * * * (every 5 minutes) over the last 10 minutes. Trigger the alert when the number of results is less than 1. I also added throttling and configured an email to be sent when the alert triggers.

It works fine most of the time, but at irregular intervals I receive alerts. Some of them are valid, because there simply were no events. For others, the saved search doesn't show events, but running the same search again shows that there actually were events logged, so I'd consider those alerts false positives. My first idea was that delays in the forwarding and indexing processes lead to this result, but checking the _indextime field rules out this explanation as well.

Does anyone have an idea why these false positives appear and how I can get rid of them?

Thanks in advance!
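If ingestion lag does turn out to be involved after all, a common mitigation (a sketch) is to delay the evaluation window so each run looks at a period that closed several minutes ago:

```
index="myindex" "the log message that i am interested in" earliest=-15m@m latest=-5m@m | head 1
```

The cron schedule stays at */5 * * * *; only the window shifts. Since the post says _indextime appears to rule out indexing delay, this may not apply here, but it is cheap to test.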
How do I create a custom alert message in the application log?