All Topics



Hi, I am trying to parse event logs into a metric index using props.conf and transforms.conf, but I am having trouble with my regex: I am not able to extract the fields at index time.

Splunk version: 7.1.x

transforms.conf:

[field_extraction]
REGEX = .*[write]\_log\svalues\:(?<host>\w*.\.\w*\.\w*)\.(?<metric_name>[\w\.\-]*)\s*(?<_value>\d{1,4})\s*(?<id>\d*)
FORMAT = host::$1 $2::$3 id::$4
WRITE_META = true

[metric-schema:extract_metrics]
METRIC-SCHEMA-MEASURES = _ALLNUMS_
METRIC-SCHEMA-WHITELIST-DIMS = host

Sample log line:

[2020-05-15 22:40:45] [info] write_log values:hostname.abcd.com.cpu-45.percent-nice 20 1648489

Out of millions of log lines, I want to keep only the ones containing write_log; wherever I find write_log, that line should be indexed.

The metrics I want:

Host - hostname.abc.com
Metric name - cpu-x.percent-nice
Metric value - 30
ID - 1648789

Any help is appreciated. Thanks
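For what it's worth, a corrected transforms.conf might look something like the sketch below. It is only a sketch: it assumes the host is always three dot-separated labels without hyphens while the metric name may contain dots and hyphens, so adjust it to the real data. Note that [write] in the original is a character class matching a single character, and mixing named capture groups with $n references in FORMAT can conflict.

```
[field_extraction]
# 'write_log' is matched literally; no character class or escaped underscore is needed.
# Unnamed groups let FORMAT assign the metric-index field names explicitly.
REGEX = write_log\svalues:(\w+\.\w+\.\w+)\.([\w.-]+)\s+(\d+)\s+(\d+)
FORMAT = host::$1 metric_name::$2 _value::$3 id::$4
WRITE_META = true
```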
Hey! I trained a StateSpaceForecast algorithm and saved it with the fit command. My goal now is to make predictions on (near) real-time data. My idea was to use the holdback variable, which I would set to a positive value, compare the prediction for (let's say) 10 minutes ago with the actual value 10 minutes ago, and send an alert if there is a difference. I want to do this with the apply command. I am unsure, however, whether the apply command retrains my model (including new data points in the training) or whether it just applies the StateSpaceForecast with exactly the parameters specified in the previous fit. Thanks!
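To illustrate the intended check, the apply-and-compare step might be sketched as below. Everything here is a placeholder (the index, the saved model name ssf_model, the field logins, and the threshold), so treat it as a shape rather than a working search:

```
index=my_index earliest=-2h
| timechart span=1m count AS logins
| apply ssf_model
| eval residual = abs(logins - 'predicted(logins)')
| where residual > 50
```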
I'm converting all our dashboards over to scheduled searches / loadjob for historic events, and also to accelerated data models.

With scheduled searches I can do the following, which brings back the results of the load job:

| loadjob savedsearch="admin:app:savedsearchname"

Can you do this with an accelerated data model, or does it essentially work in exactly the same way (I presume not, as it pulls from the indexers rather than the search head)?
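For comparison, accelerated data models are normally read with tstats against the acceleration summaries rather than with loadjob; a sketch, where My_Model is a placeholder model name:

```
| tstats summariesonly=true count FROM datamodel=My_Model BY _time span=1h
```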
I would like to check whether there will be any impact if I use inputs.conf to monitor files (1000+) that do not exist yet. This is to prepare for data on-boarding.

Will Splunk keep searching for the files and write error messages into its logs, resulting in performance or other issues? Thanks.
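For reference, a wildcard monitor stanza along these lines (the path, index, and sourcetype are examples only) can be deployed before the files exist; the tailing processor simply watches for matches to appear:

```
[monitor:///var/log/myapp/*.log]
index = myapp
sourcetype = myapp:logs
disabled = false
```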
Hi All, I have a requirement to use foreach with a search filter.

Example fields: 192345_Employeestatus, 207754_Employeestatus, 158345_Employeestatus

| foreach *_Employeestatus [search <<MATCHSTR>>_Employeestatus='<<FIELD>>' (('<<FIELD>>'="") OR ('<<FIELD>>'="new") OR ('<<FIELD>>'="Working") OR ('<<FIELD>>'="exit") OR ('<<FIELD>>'="IND") OR ('<<FIELD>>'="Aus") OR ('<<FIELD>>'="relocated") OR ('<<FIELD>>'="yettojoin") OR ('<<FIELD>>'="Manager") OR ('<<FIELD>>'="AsstManager") OR ('<<FIELD>>'="SeniorAss")) ]

But the search filter is not filtering the data as expected. I need your help. Thanks in advance. Learner
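A pattern that often works better than a search subsearch here is to do the per-field test with eval inside foreach and filter once at the end. A sketch (the allowed-values list is copied from the post and may need adjusting):

```
| foreach *_Employeestatus
    [ eval keep = if(in('<<FIELD>>', "", "new", "Working", "exit", "IND", "Aus", "relocated", "yettojoin", "Manager", "AsstManager", "SeniorAss"), 1, coalesce(keep, 0)) ]
| where keep = 1
```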
Hi, I have inherited a Splunk installation done by a 3rd party. We are currently using Splunk Enterprise version 8.0.2, with universal forwarders on a Solaris host (11.3) and 4 Solaris zones on that host.

We are experiencing very high memory consumption and CPU usage on the host and the respective zones, but a restart of the Splunk daemon usually resolves the memory issues. We are currently restarting the Splunk daemons every 4-5 days, and when we do restart the Splunk services, they jump to the top of the CPU users the moment they start.

I have read that the high CPU could be attributed to the number of files/directories being monitored, so I ran the "splunk list monitor" command on each zone being monitored and on the host, and found that certain directories were being monitored across all forwarders, even if those directories didn't exist on that zone.

I still don't know enough about Splunk (I am working through a Pluralsight Splunk fundamentals training course) to know whether the list of files/directories to be monitored is set at a zone/machine level or globally, and where I can go to find out.

Any assistance in this regard would be greatly appreciated. Thanks, Mel
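As an aside on where monitor lists come from: a forwarder merges inputs.conf stanzas from all apps under $SPLUNK_HOME/etc/apps plus etc/system, so a stanza pushed in a shared app applies on every zone that app is installed on. btool prints each effective setting together with the file that supplies it:

```
$SPLUNK_HOME/bin/splunk btool inputs list --debug
```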
When was the last time we ingested from this host? What is the average ingestion (GB) per sourcetype?
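Two sketches that are commonly used for questions like these (the index scope and the GB rounding are assumptions): the latest event time per host via tstats, and average daily ingest per sourcetype from the license usage log:

```
| tstats latest(_time) AS last_event WHERE index=* BY host
| eval last_event = strftime(last_event, "%Y-%m-%d %H:%M:%S")

index=_internal source=*license_usage.log type=Usage
| stats sum(b) AS bytes BY st date_mday
| stats avg(bytes) AS avg_bytes_per_day BY st
| eval avg_GB_per_day = round(avg_bytes_per_day / 1024 / 1024 / 1024, 3)
```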
Hi, I have a problem whereby my events service health keeps turning to critical after a while, even when I restart it. Whenever I restart the events service it goes back to healthy, but after a while it jumps back to critical again. I went through the logs and saw multiple errors, caused by:

"Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{127.0.0.1:9300}]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:290)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:207)
at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:288)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:86)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:56)
at com.appdynamics.analytics.processor.event.meter.MetersStore.getUsage(MetersStore.java:207)"

How do I resolve this? Thanks in advance.
Hi, I want to mask just specific values. This is an example of a JSON event returned in Splunk:

{"MemorySize": 256, "region": "ca-central-1", "TracingConfig": \{"Mode": "PassThrough"\}, "RevisionId": "777", "Handler": "handleRequest", "Timeout": 600, "LastModified": "2020-05-27T14:05:43.839+0000", "Environment": \{"Variables": \{"ENVIRONMENT": "dev", "USER": "username",  "USERPASSD": "password", \}\}, "Role": "arn:aws:iam::666:role/X", "VpcConfig": \{"SubnetIds": ["subnet-000", "subnet-111"], "VpcId": "vpc-333", "SecurityGroupIds": ["sg-444"]\}, "CodeSize": 5555, "Description": "Lambda", "Runtime": "java11", "Version": "$LATEST"\}}

The problem is that sensitive data appears in the clear, specifically under Environment > Variables. The variable names in this section are not the same in each event, so we cannot write a regex with specific key names, because they always change. How can I mask all values under Environment > Variables WITHOUT masking the keys?

Example of the result I want:

{"MemorySize": 256, "region": "ca-central-1", "TracingConfig": \{"Mode": "PassThrough"\}, "RevisionId": "777", "Handler": "handleRequest", "Timeout": 600, "LastModified": "2020-05-27T14:05:43.839+0000", "Environment": \{"Variables": \{"ENVIRONMENT": XXXXXX, "USER": XXXXXX,  "USERPASSD": XXXXXX, \}\}, "Role": "arn:aws:iam::666:role/X", "VpcConfig": \{"SubnetIds": ["subnet-000", "subnet-111"], "VpcId": "vpc-333", "SecurityGroupIds": ["sg-444"]\}, "CodeSize": 5555, "Description": "Lambda", "Runtime": "java11", "Version": "$LATEST"\}}

I tried a props.conf like this:

[sourcetype]
INDEXED_EXTRACTION = json
KV_MODE = none
EXTRACT-var = \{\"Variables\"\:\s*\\\{(?<Variables>[^\}]+)\\
TRANSFORMS-anony = anony_raw

and a transforms.conf:

[anony_raw]
REGEX = s/(\s*\"\s*[^\"]*\"[^\"]*\"([^\"]*)\s*\"\s*\,*)+
FORMAT = $1XXXXXX
DEST_KEY =_meta
SOURCE_KEY =_meta

But it doesn't work at all. Can you help me?
Hey all, I am currently trying to achieve the following: train a Kalman filter with a periodicity I found via autocorrelation on the last 3 weeks' data, and make a prediction for one week of future data. I do this as follows:

index = cisco_prod
| timechart span=1h count as logins_hour
| fit ACF logins_hour k=200 fft=true conf_interval=95 as corr
| top limit=2 acf(corr),Lag
| stats max(Lag) as corr_lag
| map search="search index = cisco_prod | timechart span=1h count as logins_hour | predict \"logins_hour\" as prediction algorithm=LLP holdback=200 future_timespan=368 period=$corr_lag$ upper95=upper95 lower95=lower95"
| `forecastviz(368, 200, "logins_hour", 95)`

But how do I now use these predictions for the coming week, to actually compare them to the incoming data? The thing is, I don't want to keep retraining the Kalman filter with new data, because if I feed it anomalies it will not make correct predictions for the future. Does anyone have an idea?
Hi, we have a requirement to send data from Kinesis streams to Splunk via Firehose using HEC tokens. In this use case I need to set up the infrastructure so that Firehose is configured with a load balancer in front of the heavy forwarders (if one goes down, another should be picked up), which then send the data on to the indexers. As HEC tokens are created individually on each heavy forwarder, configuring one HEC token plus the load balancer URL on Firehose will not work. What is the best way to configure a load balancer for Firehose along with the generated tokens (we don't have a cluster at the heavy forwarder tier)?

I referred to the links below, but they don't cover Firehose-to-HF load balancers:

https://docs.aws.amazon.com/firehose/latest/dev/writing-with-kinesis-streams.html
https://www.splunk.com/en_us/blog/cloud/power-data-ingestion-into-splunk-using-amazon-kinesis-data-firehose.html

Let me know if you need more info. Thanks
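One common pattern (a sketch; the token GUID, index, and sourcetype are placeholders) is to define the HEC input with the same token value on every heavy forwarder behind the load balancer, so Firehose needs only one token and one URL. Firehose's Splunk destination also expects indexer acknowledgment to be enabled on the token:

```
# inputs.conf, identical on each heavy forwarder
[http://firehose_hec]
token = 11111111-2222-3333-4444-555555555555
index = aws_firehose
sourcetype = aws:firehose:json
useACK = true
disabled = 0
```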
1.) How do I create and configure a custom indexed field?
2.) Does it need to be linked in inputs.conf? If so, how do we link the indexed-field config files and inputs.conf?

Best answers will be appreciated!!
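For illustration, a minimal index-time field extraction might look like the sketch below (my_sourcetype, user_id, and the regex are made-up examples). The link is made through the sourcetype referenced in props.conf; inputs.conf only assigns that sourcetype to the input, so nothing else needs linking there:

```
# props.conf
[my_sourcetype]
TRANSFORMS-adduser = add_user_id

# transforms.conf
[add_user_id]
REGEX = user=(\w+)
FORMAT = user_id::$1
WRITE_META = true

# fields.conf (tells search time to treat the field as indexed)
[user_id]
INDEXED = true
```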
Hello everyone, I am trying to achieve the logic below:

| set a flag called <adminuser> if the current user ID is present in the lookup (in the lookup 4 names are there: AAP1 APP2 AAP3)
| if adminuser is False, then filter where the Requestor in the event is <current user ID>; else do not filter
| table "Requested Date" "ID" "Requestor" "MD" "SM" "SL" Status

My XML code is:

index=* sourcetype="testapp"
| eval split=split(Requestor, "@"), Requestor=mvindex(split, 0)
| eval "Requested Date" = strftime(_time,"%Y-%m-%d %H:%M:%S")
| Get current user ID = (| rest /services/authentication/current-context splunk_server=local | rename username as Requestor | eval split=split(Requestor, "@"), Requestor=mvindex(split, 0))
| set a flag called <adminuser> if the current user ID is present in the lookup (in the lookup 4 names are there: AAP1 APP2 AAP3)
| if adminuser is False, then filter where the Requestor in the event is <current user ID>; else do not filter
| table "Requested Date" "ID" "Requestor" "MD" "SM" "SL" Status

@niketn do you have any idea how to achieve the above logic?
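One way to sketch this inside a Simple XML dashboard is to resolve the current user with the built-in $env:user$ token instead of a rest subsearch, look it up, and make the filter conditional. The lookup file name admin_users.csv and its field name are assumptions:

```
index=* sourcetype="testapp"
| eval Requestor = mvindex(split(Requestor, "@"), 0)
| eval current_user = mvindex(split("$env:user$", "@"), 0)
| lookup admin_users.csv name AS current_user OUTPUT name AS admin_match
| eval adminuser = if(isnotnull(admin_match), "true", "false")
| where adminuser="true" OR Requestor=current_user
| eval "Requested Date" = strftime(_time, "%Y-%m-%d %H:%M:%S")
| table "Requested Date" ID Requestor MD SM SL Status
```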
Hi All, I suspect that Splunk DB Connect is not configured properly, because when I open the app it shows the following message:

The Java Bridge server is not running.

Additional information:

OS we are running Splunk on: Linux/RHEL 7.4
Java JRE version: I checked on my server using the command rpm -q jre and it returns "package jre is not installed".
Exact version of DB Connect: I checked under $SPLUNK_HOME/etc/apps but couldn't find splunk_app_db_connect; we have installed Splunk DB Connect v1.

Please help me fix this issue. Regards, Rahul
I got results for my search like below; I got all the counts by using eventstats. I need to show total points by DisplayName, grouped by level. If I use stats then I get the total, but I need to keep all values and show only DisplayName grouped by level.

rank    DisplayName   Bcount  Lcount
Hi All, I just started working in Splunk. We got a request from a user who wants to monitor Azure CDN and Blob Storage via Splunk. Can anyone help me with this? Thanks in advance.
Hi All, we are using Splunk Cloud and are looking to customize the PDFs generated for our dashboards and sent over mail. As of now we see the default headers, footers, and Splunk logo, but we would like to change this for the dashboards under one application. I tried using the advanced edit option for the _scheduled_view, but it didn't work. We don't want to change these settings at the system level, as that would affect the PDFs for dashboards in other applications. Is there any way we can do this for dashboards in one particular application without changing the global settings?
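For reference, PDF header/footer and logo behaviour is driven by [email] settings in alert_actions.conf, which can be scoped to one app's local directory; a sketch (on Splunk Cloud a conf change like this typically has to go through Support):

```
# etc/apps/<your_app>/local/alert_actions.conf
[email]
reportIncludeSplunkLogo = 0
pdf.header_left = none
pdf.footer_center = title
```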
I have a search query:

sourcetype="file.csv" | eval Created_Date = mvindex(split(Created," "),0) | stats count as Issues_created by Created_Date

which gives me the count of issues created per date. Similarly, another search query:

sourcetype="file.csv" Resolved | eval Created_Date = mvindex(split(Created," "),0) | stats count as Issues_Resolved by Created_Date

basically filters the events which have the status Resolved and counts those per date.

I want to combine these two queries into one bar chart which displays the statistics as:

Created_Date    Issues_created    Issues_Resolved
01-01-2020      8                 8
01-02-2020      9                 0
01-03-2020      6                 1

Kindly help me with this.
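The two searches can usually be combined into a single pass with a conditional count. This sketch assumes the resolved events carry a Status field whose value is "Resolved"; adjust the eval condition to however "Resolved" actually appears in the data:

```
sourcetype="file.csv"
| eval Created_Date = mvindex(split(Created, " "), 0)
| stats count AS Issues_created, count(eval(Status="Resolved")) AS Issues_Resolved BY Created_Date
```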
This is specifically about Palo Alto Traps (or, as it's now called, Cortex XDR Prevent) logs inside Splunk. I am having a specific issue with elements of the Palo Alto Networks App dashboards showing no data. I have Cortex XDR (Palo Alto's cloud version of the Traps EMS) sending data via TCP SSL to Splunk into a dedicated index, and I see events. In the dashboard "Endpoint Operations", "Total Endpoints Reporting" is always 0, even though other elements of that same dashboard show data correctly.

When I look at the search:

| tstats summariesonly=t values(log.content_version) AS log.content_version, values(log.type) AS log.type, values(log.severity) AS log.severity, values(log.dest_name) AS log.dest_name, values(log.src_host) AS log.src_host count FROM datamodel="pan_traps" WHERE nodename="log.operations" """" log.severity="*" GROUPBY _time log.log_subtype log.user | rename log.* AS * | dedup dest_name | stats dc(dest_name)

everything is fine until the last dedup/dc part: "dest_name" is always null in all of my results for some reason. So this suggests that the data Cortex XDR sends into Splunk does not contain what the add-on expects. I'm curious whether anyone has experience with this and can advise a workaround or solution.
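One quick way to confirm what the acceleration summaries actually contain for the suspect field is to group on it directly; a sketch against the same data model and node:

```
| tstats summariesonly=t count FROM datamodel="pan_traps" WHERE nodename="log.operations" BY log.dest_name
```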
Hello everyone, I have a single source that needs to be monitored, which exists on 10 different servers. I am planning to assign one sourcetype for every two servers. Can someone tell me how I can achieve this? Please let me know if you need more information. Thanks in advance.
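With a deployment server, one way to sketch this (server names, app name, path, sourcetype, and index are all placeholders) is one server class per pair of servers, each pushing an app whose inputs.conf sets that pair's sourcetype:

```
# serverclass.conf on the deployment server
[serverClass:pair1]
whitelist.0 = server01
whitelist.1 = server02

[serverClass:pair1:app:inputs_pair1]
restartSplunkd = true

# etc/deployment-apps/inputs_pair1/local/inputs.conf
[monitor:///opt/app/logs/app.log]
sourcetype = app:pair1
index = app
```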