All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I need to get the list of indexes with the roles assigned to them. I use the following search:

| rest /services/authorization/roles splunk_server="local" | table title, srchIndexesAllowed

The result is displayed in the following format:

    index1   role1
             role2
    index2   role1
             role3

How do I display it in the following form?

    index1   role1 role2
    index2   role1 role3

Thank you!
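A minimal sketch of one way to get one row per index, assuming srchIndexesAllowed is a multivalue field and title holds the role name:

    | rest /services/authorization/roles splunk_server="local"
    | fields title, srchIndexesAllowed
    | mvexpand srchIndexesAllowed
    | stats values(title) AS roles BY srchIndexesAllowed
    | eval roles=mvjoin(roles, " ")
    | rename srchIndexesAllowed AS index

The mvexpand produces one row per role/index pair, stats regroups by index, and mvjoin collapses the roles onto a single line.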
Hello, I'm interested in creating a chart or dashboard to graph the rates of change for my inputs, sources, hosts, indexes, and source types. I'm just not sure where to start query-wise, and I'm guessing that once I have a base query I can modify it to create the others. Thanks for any help, Dan
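A possible starting point, sketched with tstats for speed; the one-hour span and the index=* scope are assumptions to adjust:

    | tstats count WHERE index=* BY _time span=1h
    | delta count AS hourly_change

delta subtracts each bucket's count from the previous one. For the per-host, per-source, or per-sourcetype variants, add that field to the BY clause and switch from delta to streamstats so the difference is computed within each group.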
I am trying to set up the Splunk App for Infrastructure with collectd. I am able to see system performance data under the "index=em_metrics sourcetype=em_metrics" query like it should be. But I cannot add entities under "Once the script finishes running, verify your data connection." How can I debug this problem? I am trying to monitor the performance of my own Splunk system (for testing purposes). The operating system is Ubuntu 20.04.2 LTS. Thanks
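Entity discovery depends on the dimensions attached to the metrics, so one hedged first step is to check which dimensions actually arrived; this sketch assumes a metrics-aware Splunk version (8.x):

    | mcatalog values(metric_name) AS metrics values(_dims) AS dimensions WHERE index=em_metrics

If expected dimensions such as host are missing from the output, the collectd write_splunk plugin configuration is a likely place to look.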
I'm interested in using the select2 JS library in Splunk 8. Does anyone have an example that can help me get started?
Is anyone else having this issue? We have upgraded to 8.1.2 on-premises core Splunk, and now all our Windows Azure inputs are no longer working. The Azure Add-on app we used to configure the inputs no longer works. Any ideas on how to resolve this? I can no longer control the inputs.
Counter 1 - Perfmon: Inbound Data rate
Counter 2 - Perfmon: Outbound Data rate
source="Perfmon:PostilionPostbridgeInterchanges"

My values look something like this: 0.007956853689073186. I am looking for the difference between the two counters, and to reduce the number to 3 places behind the decimal point, if possible.
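One hedged sketch, assuming the events carry the standard Windows TA perfmon fields counter and Value:

    source="Perfmon:PostilionPostbridgeInterchanges"
    | stats latest(eval(if(counter="Inbound Data rate", Value, null()))) AS inbound
            latest(eval(if(counter="Outbound Data rate", Value, null()))) AS outbound
    | eval difference=round(inbound - outbound, 3)

round(x, 3) handles the three-decimal requirement; swap latest() for avg() if you want the difference of averages over the time range instead.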
Hello, my Teams call record data just stopped coming in through my webhook. I have tested my webhook; it is fine. I am getting this error. Any ideas?
Hi, I am an SE for a company. For our customers, we have log files available via a URL API, e.g. https://logs.company.com/api. We use a Python script to obtain our logs; the script runs on a Splunk Forwarder, which sends the various logs to a Splunk Enterprise instance. This all works fine.

I have an external customer that uses the method described above, but now they want to switch to Splunk Cloud. Previously, their Forwarder and Indexer were installed on premises.

My question: what is the best method to get the same logs into Splunk Cloud? Should we use the existing Forwarder and re-point it from the local Enterprise instance to the new Splunk Cloud? Or should we use the Splunk Add-on Builder? What is confusing me is knowing when to use the Add-on Builder and when not to. I hope I have provided enough info for just a high-level architecture decision. Why would we build and use a Splunk Add-on instead of using a Forwarder?

Current: URL API logs are extracted via a Python script (with a TOKEN) running on a local Linux Forwarder, which sends the logs to a local on-premises Splunk Indexer.

New Option 1: URL API logs are extracted via a Python script (with TOKEN) running on a local Linux Forwarder, which sends the logs to Splunk Cloud.

New Option 2: URL API logs are extracted via a Splunk Add-on (with TOKEN?) running <where?>, which sends the logs to Splunk Cloud.

Thank you, Shane
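For Option 1, the forwarder side usually only needs its output re-pointed. Splunk Cloud normally supplies a universal forwarder credentials app that carries the certificates and server list, so treat this outputs.conf fragment as an illustration only; the stack hostname and port are placeholders:

    [tcpout]
    defaultGroup = splunkcloud

    [tcpout:splunkcloud]
    server = inputs1.<your-stack>.splunkcloud.com:9997

The Add-on Builder, by contrast, is mainly for packaging the collection logic itself (for example, turning a Python API poller into a modular input), which matters most when you want the collection to run without a customer-managed forwarder.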
Here is what I need to generate: [screenshot]

So far I have this search to generate everything but the right-most column:

index=Production sourcetype=df host="host*" MountedOn="/b1"
| stats latest(Size) as "Allocation", latest(Used) as "Usage"
| eval Environment="Production", "Server info (LDOM)"="host1,host2", "Mount Point"="/export/backups", "Backup Filesystem"="/backup01", "Local mount"="/b1"
| table Environment, "Server info (LDOM)", "Mount Point", "Backup Filesystem", "Local mount", "Allocation", "Usage"

Results: [screenshot]

And this search generates the contents for the right-most column in the spreadsheet:

index=DB sourcetype=db_size
| table source, TotalSizeGB
| eval TotalSizeGB=round(TotalSizeGB,2)
| sort -TotalSizeGB

Results: [screenshot]

How can I get the results of search #2 to appear in the final right-most column of search #1's results? Thank you!
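Since search #1 returns a single row, one hedged way to bolt on the extra column is appendcols; this sketch assumes you want one total across all DB sources (if you need one row per source, the target table shape changes):

    index=Production sourcetype=df host="host*" MountedOn="/b1"
    | stats latest(Size) as "Allocation", latest(Used) as "Usage"
    | appendcols
        [ search index=DB sourcetype=db_size
          | stats sum(TotalSizeGB) AS total
          | eval "DB Size (GB)"=round(total, 2)
          | fields "DB Size (GB)" ]

appendcols glues the subsearch's first row onto the first row of the main result, column-wise.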
I have used dashboard.css to change the Splunk nav bar color, and I can hide the Splunk app bar. My app uses the default search view, and when that is clicked the original Splunk UI appears. How do I change the nav bar color for the entire app and hide the Splunk app bar?
I would like to extract Status Code, Time Taken, and the other columns as fields.

#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken
2021-02-20 06:56:41 W3SVC1 XXX 100.x.x.x HEAD / - 9004 - 10.x.x.x HTTP/1.1 - - - www.google.com:80 403 14 0 181 70 46

Here the status code is 403 and time-taken is 46. Thanks in advance.
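The sample line has two more numeric columns than the #Fields header lists, so this sketch assumes the common IIS trailing layout sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken; verify against your own header before relying on it (the sourcetype is a placeholder):

    sourcetype=<your_iis_sourcetype>
    | rex "(?<sc_status>\d{3})\s+(?<sc_substatus>\d+)\s+(?<sc_win32_status>\d+)\s+(?<sc_bytes>\d+)\s+(?<cs_bytes>\d+)\s+(?<time_taken>\d+)\s*$"

Anchoring on the end of the line ($) keeps the capture stable even when earlier columns contain dashes or extra tokens.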
Table A:

<Type LineNo="589123363">
  <OrderLine Line="6" PrimeNo="3" S="2"> Code="75602005" /> </OrderLine>
  <OrderLine Line="6" PrimeNo="4" S="3"> Code="95602015" /> </OrderLine>
  <OrderLine Line="6" PrimeNo="2" S="4"> Code="65602025" /> </OrderLine>
  <OrderLine Line="6" PrimeNo="1" S="5"> Code="55602305" /> </OrderLine>
  <OrderLine Line="6" PrimeNo="5" S="6"> Code="65602405" /> </OrderLine>

Table B:

<Type LineNo="589123363">
  <ShipLines>
    <ShipLine PrimeNo="2" Q="1"/>
    <ShipLine PrimeNo="3" Q="2"/>
    <ShipLine PrimeNo="4" Q="4"/>
    <ShipLine PrimeNo="1" Q="3"/>
  </ShipLines>

I need to join the above two tables on LineNo and PrimeNo and get the Code. I tried, but the values are not coming out properly: the first table's values come out horizontally and the second table's values come out vertically. Please help me out.

Output:

LineNo        PrimeNo    Code
589123363     1          55602305
589123363     2          65602025
589123363     3          75602005
589123363     4          95602015
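Since the XML samples are not well-formed, spath may struggle, so this sketch pairs the attributes with rex instead; the sourcetype names are placeholders:

    sourcetype=<order_events>
    | rex field=_raw "LineNo=\"(?<LineNo>\d+)\""
    | rex field=_raw max_match=0 "PrimeNo=\"(?<PrimeNo>\d+)\"[^>]*>\s*Code=\"(?<Code>\d+)\""
    | eval pair=mvzip(PrimeNo, Code)
    | mvexpand pair
    | eval PrimeNo=mvindex(split(pair, ","), 0), Code=mvindex(split(pair, ","), 1)
    | join type=inner LineNo PrimeNo
        [ search sourcetype=<ship_events>
          | rex field=_raw "LineNo=\"(?<LineNo>\d+)\""
          | rex field=_raw max_match=0 "ShipLine PrimeNo=\"(?<PrimeNo>\d+)\""
          | mvexpand PrimeNo ]
    | table LineNo PrimeNo Code
    | sort PrimeNo

mvzip keeps each PrimeNo attached to its own Code through the mvexpand, and the inner join drops order lines (PrimeNo 5 here) that never shipped.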
I have an app with dozens of searches that refer to "groupings" of indexes based on several types of criteria. I would like to have one "source of truth" for each grouping, so when an index is retired and new ones are added I can just edit one thing instead of dozens of searches. I have tried using a CSV lookup but am having problems getting the search to work. Here is a simplified example of the CSV file:

type,index_list
rabbits,index="bunny_index" OR index="jackrabbit_index"
stones,index="turquoise" OR index="slate"
dogs,index="spaniels" OR index="hounds"
all,index="bunny_index" OR index="jackrabbit_index" OR index="turquoise" OR index="slate" OR index="spaniels" OR index="hounds"
mammals,index="bunny_index" OR index="jackrabbit_index" OR index="spaniels" OR index="hounds"

I have tried running:

| inputlookup mycsvfile where type=rabbits
| search index_list

but I get "No results found." (If I run index="bunny_index" OR index="jackrabbit_index" I do get results.) Grabbing that list of indexes to search from a centralized location would be a huge improvement over our current setup, so any input would be very much appreciated.
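The reason "| search index_list" finds nothing is that it treats index_list as a term to match rather than executing the field's value. One pattern that may work, sketched with the lookup name from above: a subsearch field renamed to "search" is expanded verbatim into the outer search string:

    [ | inputlookup mycsvfile where type="rabbits"
      | fields index_list
      | rename index_list AS search ]
    | stats count BY index

The subsearch runs first, so the outer search effectively becomes index="bunny_index" OR index="jackrabbit_index" before any events are fetched; swap the stats for whatever processing each report needs.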
I have a dataset that has both FINISHDATE and CHANGEDATE fields in text. We use strptime(CHANGEDATE) for _time, but we often need to filter only on FINISHDATE, which means that we have to do an All time search followed by | where strptime(FINISHDATE)..., which is super inefficient. I am thinking that maybe an accelerated data model with FINISHDATE set up as a number could help, but I am not sure. If I create this data model, how can I use it to bring back raw events quickly filtered by the FINISHDATE field?

I created a calculated field like this:

FINISHDATE_ = strptime(FINISHDATE, ...)

Then I created datamodel DOUBLE_DATE like this:

Extracted field: FINISHDATE_ = Number
Extracted field: ID = String

Now I have several options. I can use | from like this:

| from datamodel DOUBLE_DATE | search FINISHDATE_ > 1607299625

I can use | datamodel like this:

| datamodel DOUBLE_DATE flat | search FINISHDATE_ > 1607299625

Or I can use | tstats like this:

| tstats count FROM datamodel=DOUBLE_DATE WHERE DOUBLE_DATE.FINISHDATE_ > 1607299625 BY DOUBLE_DATE.ID
| rename DOUBLE_DATE.* AS *

The problem is that I am not sure that any of these does what I need it to do, which is to leverage the secondary time field DOUBLE_DATE.FINISHDATE_ in the foundational search (before the first pipe) so that it will BOTH return the full raw events AND be very fast. The only way that I can see to do that is to use the last one above as a subsearch to a regular search, but that would be clunky and have its own problems. Is there some other new/schema-acceleration option that I am missing?

Why don't I just use INGEST_EVAL and create an index-time field? Because we have already indexed millions of events over 10 years, and I am trying to avoid having to reindex them. But if I did this, and I set up fields.conf so that Splunk knows that it is an index-time field, then I would be able to use AND FINISHDATE_ > 1607299625.

Why don't I just test more? I have to wait for the admin to accelerate my datamodel, and that may be a while. I suspect that the | datamodel .. | search might trigger a schema-accelerated search optimization and "just work". I will post an update after testing.
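On the INGEST_EVAL route, for whatever it is worth going forward, here is a sketch of the three conf pieces; the sourcetype, the regex pulling FINISHDATE out of _raw, and the time format are all assumptions to adapt, and this only affects events indexed after the change, which is exactly the reindexing caveat above:

    # transforms.conf
    [add_finishdate_epoch]
    INGEST_EVAL = FINISHDATE_=strptime(replace(_raw, ".*FINISHDATE=(\S+).*", "\1"), "%Y-%m-%d")

    # props.conf
    [my_sourcetype]
    TRANSFORMS-finishdate = add_finishdate_epoch

    # fields.conf
    [FINISHDATE_]
    INDEXED = true

With fields.conf in place, FINISHDATE_ > 1607299625 becomes usable before the first pipe, exactly as described above.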
I have installed the 'Proofpoint On-Demand Email Security Add-on' on one of our HFs. I have been experiencing frequent dips in the incoming email logs while collecting Proofpoint data into Splunk. This issue occurs on a regular basis: every now and then, we don't see any Proofpoint logs for 10 minutes (sometimes up to 3 hours). Does anyone have an idea of what the issue is here?
I just configured a new device to send data to a syslog server (w/ universal forwarder), but when it shows up in Splunk, the time is incorrect. I have about 30 other devices from different vendors in the same configuration that are working fine. Here's an example syslog entry:

2021-02-26T15:35:09-05:00 XYZ---Office-HQ edge[9076]: EDGE_NEW_DEVICE: New or updated client device b4:56:e3:a8:91:b5, ip 10.5.38.0

When this log entry shows up in Splunk, the _time is 3:35:09 PM (in the future) when it should be 10:35:09 AM. The Splunk server (single-node) and the device are both in the same time zone as me, and other devices on the same syslog server are working fine. I've reviewed the following posts but haven't had much luck:

How time zones are processed by Splunk
Configure timestamp recognition
props.conf documentation

For example, I set the sourcetype to "velocloud:syslog" for the input, and I tried editing the sourcetype so that TIME_FORMAT=%Y-%m-%dT%H:%M:%S%:z. Unfortunately, this hasn't had any effect. I feel like I'm missing something simple, but I've now spent hours going through everything twice with no luck. Any help would be appreciated.
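One common gotcha worth checking: timestamp props take effect on the first parsing instance (the indexer or a heavy forwarder), not on the universal forwarder. A minimal stanza, assuming the sourcetype is applied at the input and the timestamp always leads the line:

    # props.conf on the indexer (not the UF)
    [velocloud:syslog]
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
    MAX_TIMESTAMP_LOOKAHEAD = 30

If the stanza only exists on the forwarder, the indexer keeps guessing the timestamp, which can produce exactly this kind of offset error.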
So when searching tag=usb, I get a message telling me: "The term 'usb*:' contains a wildcard in the middle of a word or string. This might cause inconsistent results if the characters that the wildcard represents include punctuation", but I did not add the wildcard there myself. Does anyone have any idea where this comes from? One thing I can think of is that it comes from an add-on somewhere. While investigating this a little more, I also see funky errors when searching tag=*, for instance.
I am trying to find a way to get Microsoft Security and Compliance Threat Management / Threat Intelligence data into Splunk, specifically when a user reports phishing.
Hi Team, I am currently using a trial version of Splunk Cloud and trying to ingest data from another third-party tool using an HTTP Event Collector. This is the endpoint to which I post the data using webhooks: https://inputs.prd-p-g7x4n.splunkcloud.com:8088/services/collector. The tool that actually sends the webhook data to this endpoint is detecting certificate issues for the Splunk endpoint, with the following error:

A certificate CN name does not match the passed value.

I do not have an option to bypass these SSL certificate checks. Can someone let me know how to solve this issue? I am not sure why the certificates are not maintained on the Splunk side as well. I have also attached screenshots of the SSL checks done from publicly available sites. Please let me know if you need any more information from my side. Thanks, Rakesh R
Noticed that the Azure Add-on is pulling the same logs repeatedly, roughly about 12 copies per log. We're using version 3.0.1 of the add-on. I did see in another thread that "2.0.1 fixes the duplicate alert data issue", but we're still getting it. We haven't upgraded to 3.1.0, as I didn't see any mention of this particular issue in the release notes. Has anyone else run into this and found a way to fix it?
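To quantify the duplication while investigating, a quick hedged check; the index and sourcetype are placeholders for wherever the add-on writes:

    index=<your_azure_index> sourcetype=<azure_sourcetype>
    | stats count BY _raw
    | where count > 1

Events consistently showing a count of around 12 would confirm true re-ingestion rather than, say, a search-time duplication issue.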