All Topics


Hi all, somehow the splunk_essentials_8_2 directory got removed from /opt/splunk/etc/apps. I later replicated the directory from another instance, but now I see the error below. Can someone help with this?

Validating installed files against hashes from '/opt/ee_splunk/splunk/splunk-8.2.0-e053ef3c985f-linux-2.6-x86_64-manifest'
File '/opt/ee_splunk/splunk/etc/apps/splunk_essentials_8_2/default/app.conf' changed.
Problems were found, please review your files and move customizations to local
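The check compares default/ against the install manifest, so a copy from another instance (or any edit under default/) will keep tripping it. A minimal sketch of the usual cleanup, assuming the paths from your error message and that your version has the validate subcommand (both are assumptions to verify):

# keep your customizations in local/, where the integrity check doesn't look
mkdir -p /opt/ee_splunk/splunk/etc/apps/splunk_essentials_8_2/local
cp /opt/ee_splunk/splunk/etc/apps/splunk_essentials_8_2/default/app.conf /opt/ee_splunk/splunk/etc/apps/splunk_essentials_8_2/local/app.conf
# restore default/app.conf from the original 8.2.0 package so it matches the manifest,
# then re-run the integrity check
/opt/ee_splunk/splunk/bin/splunk validate files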
Hi all, good morning! I need quick help detecting a spike in a counter: if the current hour is 20x the previous hour, an alert should trigger. I'm using mstats and mcatalog. Thanks in advance. I've pasted my sample query below to show the logging structure:

| mcatalog values(stack) as stack values(node) as node values(db) as db values(inst) as inst where index=main and [| mstats avg(_value) as value where index=main and (counter=Subject OR counter=wells) metric_name=metrices by inst counter span=10m | streamstats global=f window=2 range(value) as value by inst counter
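A minimal sketch of one way to express the 20x hour-over-hour check with mstats (metric_name, counter, and inst are taken from your sample and may need adjusting):

| mstats avg(_value) as value where index=main metric_name=metrices by inst counter span=1h
| streamstats global=f window=2 earliest(value) as prev_hour latest(value) as curr_hour by inst counter
| eval ratio=if(prev_hour > 0, curr_hour / prev_hour, null())
| where ratio >= 20

Saved as an alert running over the last two hours, this fires only when the current hour is at least 20x the previous one.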
Dear all, I have a chart with 5 columns in its output, and that output is used to create a dashboard panel. When drilldown is enabled for this panel, the cells of all 5 columns become clickable. Is there a way to disable the click (or cell selection) for a particular column, so that users cannot click its cells, or so that clicking does nothing from the system's point of view? Thanks.
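In Simple XML, drilldown <condition> elements can key on the clicked column, and an empty condition makes clicks on that column do nothing. A minimal sketch (the column name and search here are placeholders for your own):

<table>
  <search>
    <query>index=_internal | stats count by sourcetype, component | head 10</query>
    <earliest>-1h</earliest>
    <latest>now</latest>
  </search>
  <drilldown>
    <!-- clicks on this column are ignored -->
    <condition field="component"></condition>
    <!-- clicks on any other column keep a normal drilldown -->
    <condition field="*">
      <link target="_blank">search?q=index%3D_internal%20$click.value2|u$</link>
    </condition>
  </drilldown>
</table>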
I created 3 dashboards: A, B, and C. A is the main dashboard, and B and C are sub-dashboards. Is there any way to show only A in the Dashboards list and hide B and C? Thanks.
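Two levers here, with the caveat that the Dashboards page lists everything a user has read access to: trimming the app navigation hides B and C from the app's menu bar, while removing read permissions for other roles hides them from the Dashboards list itself. A minimal navigation sketch (default.xml in the app's nav, with dashboard_a as a hypothetical dashboard ID):

<nav search_view="search">
  <!-- only dashboard A appears in the app's menu -->
  <view name="dashboard_a" default="true" />
  <view name="search" />
</nav>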
I hope everyone is having a great time today. First, thank you all for being so helpful; you people rock! Second, I'd like to ask for assistance with a regular expression. I have a field that can contain a string starting with "check-in unavailable due to external cause the ref code is ##AIUI- 989 K-IOJ##", and I want to extract the string between the "##" markers. But sometimes this field holds a string that starts with "the auth was..." instead. I want to extract the string between the two "##" markers only when the value of the field starts with "check-in unavailable due to external cause the ref code is". For example:

FIELD: "check-in unavailable due to external cause the ref code is ##AIUI- 989 K-IOJ##" -> CODE: AIUI- 989 K-IOJ
FIELD: "the auth was denied code ## uik-55855##" -> CODE: N.A

Thank you guys SO MUCH. Kindly, Cindy
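A minimal sketch with rex, anchoring on the required prefix so only matching values produce a result (FIELD is your field's name, ref_code a hypothetical output field):

... | rex field=FIELD "^check-in unavailable due to external cause the ref code is\s*##(?<ref_code>[^#]+)##"

Values starting with anything else (such as "the auth was...") simply won't match, leaving ref_code null.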
I have Splunk Enterprise and ES on AWS, and DNS is not resolving all server names to IPs: I only get a partial list of hosts, some hosts are missing from it, and some of the hosts shown have no IPs. Please help with any ideas. Thank you in advance.
Hi. I have an event containing the line "Total time taken for process: 535 ms". It's not in a field; it's just in the raw event. I want to extract only the "535 ms", so I came up with this:

index = *"1500"* "Total time taken for process:" | regex _raw "\d+ ms"

It's the correct regular expression (one or more digits, a space, then "ms"), but it's not working in Splunk and I'm not sure why. It keeps throwing the error:

Usage: regex <field> (=!=) <regex>

I am not sure what this means.
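The usage message means the regex command expects an = (or !=) between the field and the pattern, as in regex _raw="\d+ ms". But even with that fixed, regex only filters events; it never extracts anything. rex is the extracting counterpart. A minimal sketch (duration is a hypothetical field name):

index=* "Total time taken for process:"
| rex "Total time taken for process:\s+(?<duration>\d+)\s+ms"
| table _time duration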
How do I resolve "the max number of concurrent historical searches on this instance has been reached" on a skipped search? Is it because the user is listed as "nobody" in the event, or is it a timing issue? Thank you in advance.
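That message means the scheduler hit the instance's concurrent historical search ceiling, which is derived from CPU count and limits.conf; "nobody" only indicates the saved search is owned at the app level, and is not itself the cause. The usual fixes are staggering cron schedules or, with care, raising the limits. A sketch of the relevant knobs with their shipped defaults (illustrative, not a recommendation):

# limits.conf
[search]
base_max_searches = 6
max_searches_per_cpu = 1

[scheduler]
max_searches_perc = 50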
I have an ECS Fargate task sending logs to Splunk, and the logs arrive just fine. I'd like to include additional log files (in a different log format), and maybe tag them differently or use labels. Is there any way to send multiple log files from a task running in Fargate to Splunk and see each of them separately? In my case there are three Tomcat logs I care about: the catalina log lands fine in Splunk, but I'd like to add the two others.
index=phantom_container AND owner!=null AND close_time!=null | eval st=strptime(create_time, "%Y-%m-%dT%H:%M:%S") | eval et=strptime(close_time, "%Y-%m-%dT%H:%M:%S") | eval Dur=(et-st)/60 | table create_time close_time Dur id container_label owner_name

Here is the basic search; now I would like to find the average amount of time between create_time and close_time per owner_name.
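A minimal sketch building on that search; once Dur exists, stats can average it per owner:

index=phantom_container AND owner!=null AND close_time!=null
| eval st=strptime(create_time, "%Y-%m-%dT%H:%M:%S")
| eval et=strptime(close_time, "%Y-%m-%dT%H:%M:%S")
| eval Dur=(et-st)/60
| stats avg(Dur) as avg_duration_min by owner_name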
The automatic setup in this app will not complete (the Overview shows "not configured"). The Getting Data In dashboards never finish their lookup creation, even though the lookups were created. Also, the "Check Data" section always shows the indexes created by the app as missing data, even though the admin user has all of these indexes configured for default search. Is there a checkbox somewhere that I haven't selected? The setup screen remains in this state for days (screenshots not included). And if setup does complete and the app is done being set up, how do the dashboards get populated with info from this app in the Splunk App for Windows Infrastructure?
In Python you can use rrule to identify the weekend days in a week and subtract them from a calculation. I ask because I'm pulling a ticket open date and a ticket close date, and I'm trying to compute SLA values based on working days (we are not open on weekends and are only open 6am-6pm) for tickets that span nights or weekends. How can I remove that non-working time dynamically, for data automatically pulled from a ticketing system, rather than working from a static source like an Excel spreadsheet? That is, the calculation needs to keep updating as time goes on. Bonus points if you can help account for the 6am-6pm workday in the SLA timers, and bonus bonus points if you know how to exclude holidays, LOL.
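A minimal SPL sketch of one approach, assuming epoch-seconds fields open_time and close_time and a ticket_id (all hypothetical names): enumerate each calendar day a ticket spans, drop weekends, and clip each day to the 06:00-18:00 window. Holidays could be handled the same way by filtering against a lookup of holiday dates.

| eval day_starts=mvrange(relative_time(open_time, "@d"), close_time, 86400)
| mvexpand day_starts
| eval dow=strftime(day_starts, "%w")
| where dow!="0" AND dow!="6"
| eval win_start=day_starts + 6*3600, win_end=day_starts + 18*3600
| eval worked=max(0, min(close_time, win_end) - max(open_time, win_start))
| stats sum(worked) as sla_seconds by ticket_id
| eval sla_hours=round(sla_seconds / 3600, 2)

Because it recomputes from the raw open/close timestamps on every run, it keeps updating as tickets age, with no static spreadsheet involved.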
I cannot figure out which component should have HEC enabled and where to send the events. We have an on-prem Splunk Enterprise distributed configuration with a deployment server, an indexer, and a search head. We also have an independent "sandbox" environment for testing, where I'm trying to set this up. The sandbox is one server with the whole Splunk Enterprise installation, though we do use the deployment server to set up and configure the sandbox universal forwarders, etc. I set up HEC tokens on the sandbox and could not get it working; I am testing with curl commands. I then added HEC tokens on the deployment server and, still testing with curl, I cannot send events to it either. I get these errors:

1) Sending curl to the sandbox URL with either the deployment server HEC token or the sandbox HEC token: "The requested URL was not found on this server.","code":404
2) Sending curl to the indexer URL with either token: Failed to connect to spidxa.open-techs.local port 8088: Connection refused
3) Sending curl to the deployment server URL with either token: Failed to connect to spmgta.open-techs.local port 8088: Connection timed out
4) Sending curl to the search head URL with either token (this is likely a firewall issue, but it doesn't make sense to me to send events to the search head, so I haven't pushed security to open this port): Failed to connect to spsha.open-techs.local port 8088: No route to host

This is my curl command, with escaped double quotes and {variable substitutions}:

curl -g -k --location --request POST 'https://{server I am testing}:8088/services/collector/event' --header "Authorization: Splunk {token}" --header "Content-Type: text/plain" --data-raw "{\"event\": \"Test kong_dev\"}"

Can anybody tell me which components handle which part of HEC event collection? The introspection http_event_collector_metrics.log on both the deployment server and the sandbox just shows one-minute intervals with 0 transactions.
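HEC runs wherever the token is defined and port 8088 is listening, which is normally the indexers (or a heavy forwarder), not the deployment server or the search head. A minimal sketch of what the receiving side looks like, assuming an app deployed to the indexer (the token value is a placeholder):

# inputs.conf on the instance that should receive events
[http]
disabled = 0
port = 8088

[http://my_hec_token]
token = 11111111-2222-3333-4444-555555555555
index = main
disabled = 0

Then aim curl at that same host:

curl -k https://spidxa.open-techs.local:8088/services/collector/event -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" -d '{"event": "Test kong_dev"}'

Reading your errors: "Connection refused" on the indexer means nothing is listening on 8088 there (HEC not enabled), while the 404 on the sandbox means something answered but not at that URL path, so that one is closest to working.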
I have a Splunk Enterprise deployment and want to ingest Windows logs (Application, System). I am using the Splunk Add-on for Microsoft Windows (https://splunkbase.splunk.com/app/742/) because I want its field extractions/transforms. I also have my own TA with a custom inputs.conf, which I use to specify which logs to collect. So, two apps:

Windows TA - for the transforms
My custom app - for the inputs.conf

The Windows TA has been installed on the deployment clients via the deployment server; the stanzas in its inputs.conf have `disabled=1`. The custom app has also been deployed via the deployment server, with all stanzas in its inputs.conf set to `disabled=0`. There is no TA installed on the Splunk server, only on the UF (deployment client). Hence, I assume it will collect the logs specified in my custom inputs.conf and apply the transforms from the Windows TA. Below is a snippet of my custom inputs.conf:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
renderXml = true
evt_resolve_ad_obj = 1
index = windows

[WinEventLog://System]
disabled = 0
renderXml = true
evt_resolve_ad_obj = 1
index = windows

[WinEventLog://Application]
disabled = 0
renderXml = true
evt_resolve_ad_obj = 1
index = windows

However, in Splunk I see the data but without any extractions (screenshot not included). I tried setting a stanza in the Windows TA to `disabled=0` and I still get no extractions. Do I need to enable the transforms somewhere? Any ideas why extraction isn't happening, regardless of whether I use my custom inputs.conf or the Windows TA's inputs.conf? Additionally, may I ask: how does one know whether an app is meant to be installed on the search head, indexer, UF/HF, etc.? I did not find any indication of this on the app page, hence I've only installed the app on the UF.
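One thing to check: search-time field extractions run on the search head (or a standalone indexer/search instance), so the Windows TA has to be installed there as well, not only on the UF; apps on a UF only affect inputs and forwarding. Also, with renderXml=true the events arrive as XML with sourcetype XmlWinEventLog, and the add-on's XML-specific extractions only apply when the searching instance has the TA. A minimal inputs sketch if you'd rather test with classic (non-XML) rendering first, reusing the index from your snippet:

[WinEventLog://System]
disabled = 0
renderXml = false
evt_resolve_ad_obj = 1
index = windows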
I am trying to create a map visualization from a list of data that has the physical address of each event in a field named 'location':

| inputlookup data.csv | table location

Example data:

Earth
Wytheville, VA
Boston, MA
1 Main St, Waltham, Massachusetts
Mexico City, Mexico
Wellington St, Ottawa, ON K1A 0A9, Canada

I want to take these physical addresses and add them to the Map visualization in Splunk, but I am not seeing how to add the data to the chart.
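Splunk's built-in maps take coordinates (geostats for the cluster map) or region names (choropleth), not street addresses, and iplocation only geocodes IP addresses, so these addresses would need geocoding outside Splunk first. A minimal sketch, assuming hypothetical lat and lon columns added to data.csv by an external geocoder:

| inputlookup data.csv
| geostats latfield=lat longfield=lon count

The result feeds the Cluster Map visualization directly.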
We keep getting this "empty" log back whenever we run a search within this host/sourcetype. It doesn't seem to matter what other search terms we put in; it always comes up (screenshots of the event and its raw text not included). So far none of our admins can figure out where it's coming from or why. Anyone have any bright ideas?
Hi, I have a multivalue field that I need to split apart into other multivalue fields (screenshots of the current query result and the desired output not included). I've been fighting with the mv commands, but nothing seems to work quite the way I want, so I figured I'd raise my hand and ask the Splunk wizards.
I have production equipment storing a log that I can access through FTP. I installed FTP Pull and set up an input, and it works OK that far. However, the file format is a bit odd, so simply taking it in is not enough: it has a special timestamp that Splunk does not interpret correctly out of the box, and there is no header line in the file. I created a new sourcetype where I configured the timestamp format and field names. When I upload the file manually and apply that sourcetype, the data is indexed properly. I selected this sourcetype in the FTP input configuration, but it does not seem to take effect: the indexed events get the selected sourcetype associated with them, but the sourcetype's configuration is not observed, so when the file comes through FTP it is indexed incorrectly. Is there a way to make the FTP input actually apply the configuration of the selected sourcetype? Thanks in advance for sharing your thoughts or experience with me.
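One thing worth verifying: index-time sourcetype settings (timestamp format, header handling) must exist in props.conf on the instance that first parses the data, i.e., where the FTP Pull modular input runs, whereas a sourcetype created through the manual-upload UI may only exist on that box and in that app context. A minimal props sketch, with the sourcetype name, timestamp format, and field names all placeholders for yours:

# props.conf on the instance running the FTP Pull input
[my_ftp_equipment_log]
TIME_PREFIX = ^
TIME_FORMAT = %Y%m%d%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 20
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
FIELD_NAMES = timestamp,station,status,value

With no header line in the file, FIELD_NAMES supplies the column names a header would normally provide.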
Good morning, community. I have the following problem: a few days ago I stopped receiving JSON logs from Oracle CASB Cloud. The only events I still receive are the ones shown in the image (not included). Any tips on what I should investigate?
Hello team, I need advice regarding an issue I'm experiencing. I have a heavy forwarder which collects data from Event Hub with sourcetype mscs:azure:eventhub. At the HF level I transform the data from one specific provider to override the sourcetype from mscs:azure:eventhub to mscs:azure:eventhub:databricks. The logs are in JSON format. One field in the logs, properties.requestParams, holds nested key-values (the keys differ from log to log and are not the same for all logs). The raw format of properties.requestParams looks like:

"properties.requestParams" : "{\"KeyA\": \"valueA\" , \"keyB\":\"valueB\"}"

I want to remove the double quotes before and after the curly braces, and the escape characters. My challenge is that the SEDCMD works only if I apply it on the HF in props.conf for mscs:azure:eventhub, not for my new sourcetype mscs:azure:eventhub:databricks, and for some reason it then also breaks the format of the events that keep the sourcetype mscs:azure:eventhub, which I don't want. Any idea how to approach this issue?
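That behavior is expected: index-time props, including SEDCMD, are selected by the sourcetype an event enters the parsing pipeline with, and a sourcetype rewritten by a TRANSFORMS rule only takes effect afterwards, so a SEDCMD keyed to mscs:azure:eventhub:databricks never fires on the same instance. One workaround is to keep the SEDCMD under the original sourcetype but anchor its patterns on the requestParams signature so other events pass through untouched. A minimal sketch, with patterns that are illustrative and need testing against real events:

# props.conf on the HF
[mscs:azure:eventhub]
TRANSFORMS-set_dbx = set_databricks_sourcetype
# anchored on requestParams so plain eventhub events are not rewritten
SEDCMD-unquote_open = s/("requestParams"\s*:\s*)"\{/\1{/g
SEDCMD-unquote_close = s/\}"(\s*[,}])/}\1/g

# transforms.conf on the HF
[set_databricks_sourcetype]
REGEX = databricks
FORMAT = sourcetype::mscs:azure:eventhub:databricks
DEST_KEY = MetaData:Sourcetype

The remaining escaped quotes (\") inside the braces are harder to scope safely: a global s/\\"/"/g would also touch any other event containing escaped quotes, so that last step may be better done at search time with replace() or spath.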