Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

How is the Splunk Cloud admin certification different from the Splunk sysadmin certification?
Good morning, I am very new to the dashboard world and would be delighted to see any examples you might have! I have the panel downloaded and just need to figure out the search string. Any help would be greatly appreciated!
I am trying to configure Gmail SMTP, but when I send a test email using the command below I get the following error. Any help will be appreciated.

Command:
index=_internal | head 1 | sendemail to="XXXXXXXX@gmail.com" format="html" server=smtp.gmail.com:587 use_tls=1

Error:
command="sendemail", (530, b'5.7.0 Authentication Required. Learn more at\n5.7.0 https://support.google.com/mail/?p=WantAuthError w22-20020a1709027b9600b0019a593e45f1sm196622pll.261 - gsmtp', 'splunk@ip-172-31-36-251.ap-south-1.compute.internal') while sending mail to: @XXXXXXX@gmail.com
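For context, the 530 response means Gmail is rejecting the unauthenticated session, so SMTP credentials (for Gmail, an app password) have to be configured under Settings > Server settings > Email settings, or equivalently in alert_actions.conf. A minimal sketch, where the username and password values are placeholders:

# alert_actions.conf -- a sketch; credential values are placeholders
[email]
mailserver = smtp.gmail.com:587
use_tls = 1
auth_username = XXXXXXXX@gmail.com
auth_password = <gmail-app-password>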
Hi!  Is it possible to create a link in DB alert messages (Email and HTTP) that leads directly to the metric browser for the metric that deviated from the norm during the specified time period? Because the "View Dashboard During Health Rule Violation" option only goes to the main DB dashboard.
Hello, I need help finding or configuring an IP address for my Splunk server so I can send syslogs from my pfSense device to it; the local address (127.0.0.1) will obviously not work. I have looked everywhere and the solutions are not clear. I don't know why, with such a friendly and beautiful software tool as Splunk, there is no option to configure an IP address or even find it. I am using Windows 10. If anyone can help me, I would really appreciate it.
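For reference, Splunk listens on the host machine's own address, so the IP to point pfSense at is the Windows machine's LAN address (shown by running ipconfig in a command prompt). A minimal sketch of a syslog listener, assuming UDP port 514 and a hypothetical sourcetype name:

# inputs.conf -- a sketch; the port and sourcetype are assumptions
[udp://514]
sourcetype = pfsense:syslog
connection_host = ip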
"eventHandlers": [                 {                     "type": "drilldown.customUrl",                     "options": {                         "colId": "dstip",                         "url": ... See more...
"eventHandlers": [                 {                     "type": "drilldown.customUrl",                     "options": {                         "colId": "dstip",                         "url": "https://example.com/$row.dstip.value$",                         "condition": "return event.columnId === 'dstip' && event.rawDataRow[event.columnId];",                         "newTab": true                     }                 },                 {                     "type": "drilldown.customUrl",                     "options": {                         "colId": "dst_hostname",                         "url": "https://example.com?query=$row.dst_hostname.value$",                         "condition": "return event.columnId === 'dst_hostname' && event.rawDataRow[event.columnId];",                         "newTab": true                     }                 }             ] I am using drilldown in the table graph of Dashboard studio, and when there is a row in the table, I want to make the drilldown corresponding to the column only occur when I click the value of the column. How do I do that? Drilldown of dst_hostname occurs even when the value of dstip is clicked, and drilldown occurs even when the value of a column that is not set is clicked.
Hi guys, I have a scheduled search (report) running a query with earliest=-2h@m latest=now, and I have redirected the output to a summary index. The output is being written to the index, but its _time is the start time of the search. The results of my search do have a _time field, but that field is not being used when the data is indexed. Any suggestions on how to index using my _time field instead of the search start time? Thanks in advance.
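For reference, a sketch of the usual cause: if the final results of the scheduled search no longer carry a _time column (for example, after a stats that drops it), the summary-indexed events are stamped with the search's start time; keeping _time in the output lets it be used instead. Index and field names below are placeholders.

index=my_index sourcetype=my_sourcetype ``` placeholders ```
| bin _time span=5m
| stats count by _time, host ``` keeping _time in the stats output preserves the event timestamp ```
| collect index=my_summary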
How to limit the upload of CSV lookups by size. Hi, I want to restrict the upload of lookup files beyond a certain size. For example, a user with some role has permission to upload a lookup file, but I want to constrain that role so the user can still upload or create a CSV lookup file, just not one larger than 5 MB.
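For context, I am not aware of a per-role size limit for lookup uploads; what does exist is a global cap on file uploads through Splunk Web, set in web.conf. A sketch, reusing the 5 MB figure from the question:

# web.conf -- a sketch of the global (not per-role) upload cap, in MB
[settings]
max_upload_size = 5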
We are monitoring for when a single KV store lookup surpasses 25 GB in size AND when the total of all KV store collections surpasses 100 GB. Time and time again I am seeing single collections over 25 GB, and totals over 100 GB, across many different unique environments, so those thresholds don't appear to be actual limits. What are the real limits for a single KV store lookup and for the total of all KV store lookups, and can we query them? We want to prevent any KV store crashes. Thank you.
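For reference, a minimal sketch of querying current collection sizes through the KV store introspection REST endpoint (run on the search head; the ns, size, and count fields are assumptions about the returned JSON):

| rest splunk_server=local /services/server/introspection/kvstore/collectionstats
| mvexpand data
| spath input=data
| eval size_MB = round(size/1024/1024, 2)
| table ns size_MB count
| sort - size_MB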
Is it possible to control which API requests a role is allowed to make? For example, can I restrict a role so that it can only list saved searches via servicesNS/-/-/saved/searches?
Hello, we are trying to ingest JSON-based messages from an AWS SQS topic. When ingesting the messages we find extra JSON wrapped around the actual Message we want; the extra JSON is added automatically by AWS SQS. The Message we want to ingest sits at the path "?BodyJson?Message". Can we configure the Splunk TA to pull the SQS messages off the topic but apply some type of path expression or transform so that only the Message (?BodyJson?Message) is ingested? The screenshot (not reproduced here) highlighted only that inner Message; it is buried in all the surrounding JSON. The full JSON of one message:

{
  "MessageId": 23411111111444,
  "ReceiptHandle": "y",
  "MD5OfBody": 23411333333333111111444,
  "Body": "{\n \"Type\" : \"Notification\",\n \"MessageId\" : \"xxxxxxx-xxx-xxxxxx\",\n \"TopicArn\" : \"arn:topic123\",\n \"Message\" : \"{\\\"timestamp\\\": \\\"1680882420000\\\", \\\"metric_name:test\\\": \\\"0\\\", \\\"aggregation\\\": \\\"avg\\\", \\\"resolution\\\": \\\"1m\\\", \\\"unit\\\": \\\"Percent\\\", \\\"entity.id\\\": \\\"SERVICE-12345\\\", \\\"entity.name\\\": \\\"test\\\", \\\"source.name\\\": \\\"testsource\\\"}\",\n \"Timestamp\" : \"2023-04-07T15:56:02.509Z\",\n \"SignatureVersion\" : \"1\",\n \"Signature\" : \"23423423423\",\n \"SigningCertURL\" : \"https://sns.u234234234234234234\",\n \"UnsubscribeURL\" : \"https://sns.23423423423423423423\"\n}",
  "Attributes": {
    "SenderId": "xxxxxxxxxxxxxxx",
    "ApproximateFirstReceiveTimestamp": "1680882978026",
    "ApproximateReceiveCount": "1",
    "SentTimestamp": "1680882962536"
  },
  "BodyJson": {
    "Type": "Notification",
    "MessageId": "xxxxxxxxxxxxxxxxx",
    "TopicArn": "arn:aws:sns:us-east-1:996142040734:APP-4498-dev-PerfEngDynatraceAPIClient-DynatraceMetricsSNSTopic-qFolXGcy2Ufh",
    "Message": "{\"timestamp\": \"1680882420000\", \"metric_name:test\": \"0\", \"aggregation\": \"avg\", \"resolution\": \"1m\", \"unit\": \"Percent\", \"entity.id\": \"SERVICE-12345\", \"entity.name\": \"test\", \"source.name\": \"testsource\"}",
    "Timestamp": "2023-04-07T15:56:02.509Z",
    "SignatureVersion": "1",
    "Signature": 23423423423,
    "SigningCertURL": "https://sns.u234234234234234234",
    "UnsubscribeURL": "https://sns.23423423423423423423"
  }
}
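For context, a sketch of one way to keep only the inner Message at index time, assuming a recent Splunk version where json_extract() is available to INGEST_EVAL; the sourcetype name is a placeholder, not the TA's actual sourcetype:

# props.conf -- sourcetype name is an assumption
[aws:sqs:custom]
TRANSFORMS-keep_inner_message = sqs_keep_inner_message

# transforms.conf -- rewrites _raw to just BodyJson.Message before indexing
[sqs_keep_inner_message]
INGEST_EVAL = _raw=json_extract(_raw, "BodyJson.Message")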
I know my customer POC can download the Splunk ODBC driver individually, but what restrictions (if any) are there on making the driver available to their enterprise users?
I have created a dashboard with an incident count and some other counts in different single value visualizations. I need to align the title text of each single value left or center. Could anyone help me here?
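For reference, in a classic Simple XML dashboard this is usually done with a small CSS override. A sketch below, where the panel id and the .panel-title class are assumptions that can vary by Splunk version (inspect the element in browser dev tools to confirm):

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* panel id and class name are assumptions */
        #my_single_panel .panel-title { text-align: center; }
      </style>
    </html>
  </panel>
</row>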
Hi everyone, my post is huge, sorry for that. I need your suggestions on the query I framed. I use two lookups (lookfileA, lookfileB):

- BaseA: count by Division in lookfileA
- Column_IndexA: compare lookfileA against indexA and get the matching host count
- BaseB: count by Division in lookfileB
- Inscope: count by Division in lookfileB with Active status
- Column_OtherIndexes: compare lookfileB against the other indexes and get the matching host count

index=indexA
| lookup lookfileA host as hostname OUTPUTNEW Division
| fields hostname, Division
| stats dc(hostname) as "Column_IndexA" by Division
| append
    [| tstats count where index IN ("win","linux") by host
     | eval host=upper(host)
     | fields - count
     | join type=inner host
         [| inputlookup lookfileA
          | fields host, Division
          | eval host=upper(host)]
     | stats count as "Column_OtherIndexes" by Division]
| append
    [| inputlookup lookfileA
     | stats count as "BaseA" by Division]
| append
    [| inputlookup lookfileB
     | stats count as BaseB by category
     | where category IN ("Win","Linux")
     | rename category as Division]
| append
    [| inputlookup lookfileB
     | stats count as Inscope by category, status
     | where category IN ("Win","Linux") AND status="Active"
     | rename category as Division]
| fields Division, BaseB, Inscope, "Column_OtherIndexes", "BaseA", "Column_IndexA"
| stats values(*) as * by Division
| table Division, BaseB, Inscope, "Column_OtherIndexes", "BaseA", "Column_IndexA"
| eval Difference="Column_IndexA" - "Column_OtherIndexes"
| fillnull value=0
| addtotals col=t row=f labelfield=Division label=Total

Below is the sample output; I need to compute the Difference column. I used the eval command but I am getting an error.

Division  BaseB  Inscope  Column_OtherIndexes  BaseA  Column_IndexA  Difference
M         300    200      50                   300    200            200-50
N         200    100      20                   300    200            200-20
Total     500    300      70                   600    400            400-70
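A likely fix for the eval error, as a sketch: in SPL, double quotes create string literals, so "Column_IndexA" - "Column_OtherIndexes" tries to subtract two strings; single quotes dereference field names instead.

| eval Difference='Column_IndexA' - 'Column_OtherIndexes'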
I have a field named start_time on an artifact, and I am trying to send an email to a team. But if I just choose the API name, it sends the epoch time, and it needs to be in a readable format. Is there a child playbook or custom function for this, please?
Hi folks, we have a complaint from stakeholders that they are seeing duplicate events in Splunk. They shared a few examples where the same events were indexed hundreds, sometimes thousands, of times. I can confirm that there are no duplicate stanzas monitoring these log files in inputs.conf. I checked the actual log files and some events were duplicated in the source file itself, but the count there was 12, whereas in Splunk the same event was indexed 524 times. We also see a lot of inconsistency in how the logs are written at the source: timestamps are either missing or partial. Could that be the reason Splunk is getting into a bad state and re-reading the log files? This is how my inputs.conf is configured:

[monitor://\\hostname\log]
recursive = false
disabled = false
followTail = 0
host =
host_regex = ^.*\d-(.+?)-\d+-\d{8}_?\d*\.
ignoreOlderThan = 8d
index = index
sourcetype = sourcetype
whitelist = \.log|\.out
time_before_close = 1
initCrcLength = 4096
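For reference, a quick sketch to quantify the duplication per source while investigating (index and sourcetype are the placeholders from the stanza above):

index=index sourcetype=sourcetype
| stats count by source, _raw
| where count > 1
| sort - count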
Could someone help me with such a query? I am running a scheduled search every 30 minutes that aims to find duplicate registrations from the last 30 minutes that were also used when compared to the last 4 hours. Since it runs every 30 minutes, I cannot just search a 4-hour window, or it would keep triggering the alert every 30 minutes for 4 hours.

index=myindex userRegistration earliest=-4h latest=now
| stats dc(userName) as UserCount
| where UserCount > 1
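A sketch of one way to express "seen more than once in 4 hours, with at least one occurrence in the last 30 minutes", so the alert only fires for fresh duplicates (using userName per registration is an assumption):

index=myindex userRegistration earliest=-4h latest=now
| stats count as total_4h, count(eval(_time >= relative_time(now(), "-30m"))) as recent_30m by userName
| where total_4h > 1 AND recent_30m > 0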
Here we are getting a pop-up message whenever we navigate from one dashboard to another. Is there any way we can remove it permanently from the backend?
I have the latest version of PCI Compliance installed, but when accessing the requirement reports, the panels show "Search is waiting for input". I have tried reinstalling both PCI Compliance and ES, but the problem is still not fixed. Does anyone have a solution to this problem? Thanks.
I am setting up Splunk Stream. I am having trouble with the official instructions, which are very confusing for a beginner. Below is the environment that has already been set up.

Server A
- XAMPP
- DVWA
- UF (ver 9.0.4)

Server B
- Splunk (ver 9.0.4)
- Stream (8.1.0) → to be installed

I would like to deploy Stream on server B to analyze DVWA logs sent from the UF on server A. Can someone please itemize and explain the necessary steps? I know this is a rudimentary question, but please help.
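For reference, a minimal sketch of the usual layout: the Splunk App for Stream (splunk_app_stream) is installed on server B, and the Splunk_TA_stream add-on goes onto the universal forwarder on server A, pointed back at server B. The hostname and port below are placeholders:

# Splunk_TA_stream/local/inputs.conf on server A -- hostname/port are placeholders
[streamfwd://streamfwd]
splunk_stream_app_location = http://serverB:8000/en-us/custom/splunk_app_stream/
disabled = 0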