All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I am trying to use a macro inside a macro validation expression. This is because I plan to make a number of similar macros which all share some inputs that need to be validated. However, when I run the macro I get the following error:

The validation expression is invalid: 'The expression is malformed. An unexpected character is reached at '`names`)...

This indicates to me that macros are not allowed inside validation expressions. Here is a minimal example:

1. Create a macro 'names' which just contains a list of allowed values for a name (e.g. "john", "hank", "pete").
2. Create a macro 'test(1)' that takes one argument, 'name'.
3. In the validation expression of 'test', validate the 'name' argument by checking whether it occurs in the list: in("$name$", `names`)
4. Now run the 'test' macro in a search; this will give the aforementioned error.

I was wondering whether this is intended behavior, or perhaps a bug or a wrong setting in my company's Splunk installation? Thanks, Yolan.

Edit: I created an Idea to add this functionality; please vote for it if you think it is a good idea, thanks! Allow use of macros in macro validation expressions | Ideas (splunk.com)
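A workaround sketch for the minimal example above (hedged; the macro name `names`, the argument `name`, and the allowed values are all taken from the post). Since macros do not appear to be expanded inside validation expressions, one option is to inline the allowed values into each validation expression:

```spl
in("$name$", "john", "hank", "pete")
```

This duplicates the list across macros, which is exactly what the shared `names` macro was meant to avoid; an alternative is to skip the validation expression and perform the check inside the macro body, where `names` expands normally.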
Is it possible to apply a different color visualization for the "Yes" and "No" graphs having the same field "count"? I tried using the following code, but it didn't change anything.

<option name="charting.fieldColors">
  {"count:Yes": #0000FF, "count:No": #FFA500}
</option>
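A hedged fix (a sketch; whether the series keys need the "count:" prefix depends on how the chart splits the series, as a `timechart ... by` split usually yields series named just "Yes" and "No"): the hex color values must be quoted so the option body is valid JSON:

```xml
<option name="charting.fieldColors">
  {"Yes": "#0000FF", "No": "#FFA500"}
</option>
```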
I want the values of TID now and TID 7 days ago in my table. I tried:

| eval TID_7days=TID(now(), "-7d@d")

but it says the expression is malformed.
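eval treats TID(...) as a function call, and TID is not an eval function, which is why the expression is reported as malformed. A sketch of one common pattern for comparing a value now against the same value seven days ago (hedged; it assumes TID is a field in the events and your_index is a placeholder):

```spl
index=your_index earliest=-7d@d
| timechart span=1d dc(TID) as TID_count
| timewrap 1w
```

timewrap produces side-by-side columns for the current and previous period, which can then be tabulated.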
Hello, I am unable to find apps on Splunk Web; I am getting the error below. My system is configured to reach the internet through a proxy IP. I am accessing the URL http://10.x.x..x:8000/

Error resolving Name or service unknown
Hi, in my indexes.conf I have added the following for this index:

[ind1]
homePath = $SPLUNK_DB/ind1/db
coldPath = $SPLUNK_DB/ind1/colddb
thawedPath = $SPLUNK_DB/ind1/thaweddb
maxHotBuckets = 10
maxDataSize = 10000
maxWarmDBCount = 300
maxTotalDataSizeMB = 200000
frozenTimePeriodInSecs = 31536000
coldToFrozenDir = $SPLUNK_DB/ind1/frozendb

Does that mean the data will be deleted after 1 year? And if I do not add a volume for this, will it affect my index? Please help with this.
Hello Splunk Gurus, I would like to understand whether Splunk has solved the problem of auto-scaling a Splunk indexer cluster based on incoming data volume in AWS, via tools like K8s, Terraform, or any other. The problem statement is:

Spin up more indexing nodes as the data volume increases, automatically:
- Provision an AWS instance with a Splunk image
- Mount the data volume
- Add the indexer into the existing cluster as a peer to store and replicate the buckets

Remove indexing nodes as the data volume decreases, automatically:
- Inform the Cluster Master about scaling down
- Remove the indexer(s) from the cluster
- Unmount the data volume and free up the disk space back to AWS
- Decommission the AWS instances
- Make sure the data is fully available and searchable during this process

The purpose of this exercise is to save AWS cost, since it is a pay-as-you-use model: on days with less incoming data, some of the indexing nodes could be shut down, since they are mostly underutilized on such days due to less search activity and less indexing data.

My biggest concern about auto-scaling is the fact that buckets are replicated randomly across all the indexers of the cluster. On a day with less incoming data, say over a weekend, if n indexer nodes are shut down to save cost, the data is not completely available. And with SF=2, RF=2, if the cluster recovers to its full state while those n nodes are down, then on Monday there will be many excess buckets when those nodes rejoin the cluster to handle the working weekday traffic.

Answers I seek: I would like insights on this problem in terms of approach and strategy, if someone and/or Splunk has solved it with their Splunk Cloud offering.
I would also like to understand, and have assessment inputs from the community and Splunk Gurus / Architects, whether it's really a worthy problem to solve or whether it makes sense at all; it may be an absurd idea and I am fine learning that. Thanks!
Hi Team, we have configured the .NET agent and it is working properly. For custom rule and MIDC configuration we are trying to access Live Preview, but it is not reporting any data. I have tried with a specific class method and with all classes as well, but in both situations it does not populate any information. Do we need to enable any specific flag to get data in Live Preview for .NET? If yes, can you please help me with the details? Thanks, Mohit Jain
Though I've completed my lab exercise, it is still showing "In Progress". I'm unable to get my certificate because of this. Please help.
The Splunk query below is only showing one line of Metric_ID, which starts at 1. I need help with a Splunk query that will show all 68 lines of Metric_ID starting from 1.

index=security sourcetype="Computers" "Computer Status"=Enabled
| bin _time span=1day
| dedup _time sAMAccountName
| timechart span=1day count
| search count > 0
| stats avg(count) AS avg stdev(count) AS stdev min(count) AS min max(count) AS max latest(count) AS latest_count
| eval min_thres=5000, max_thres=7500
| eval alert=if((latest_count<min_thres OR latest_count>max_thres), 1, 0)
| eval Metric_ID="1"
| lookup free_metrics.csv Metric_ID output Data_Item_volatility, Metric_ID, Metric_Name
The Proofpoint email security app TA-Proofpoint-TAP stopped ingesting logs. I get the SSL error message below in splunkd.log. I have installed all the necessary certificates, but I still get this error. When I try with a curl command it connects and returns logs, but when the app connects using proofpoint_tap_siem.py I get the error below. I even disabled SSL verification by setting VALIDATE_SSL = False in proofpoint_tap_siem.py at line 43, but I still get the same error. I use Splunk 7.5.3 on Red Hat Linux with Python 2.7 and TA-Proofpoint-TAP version 2.0.

12-29-2020 16:05:58.949 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-Proofpoint-TAP/bin/proofpoint_tap_siem.py" proofpoint_tap_siem://proofpoint_tap_siem: query_and_save/Could not query TAP URL- https://tap-api-v2.proofpoint.com/v2/siem/all?format=json&threatStatus=falsePositive&threatStatus=active&threatStatus=cleared&interval=2020-12-29T09%3A21%3A41Z%2F2020-12-29T10%3A21%3A40Z ([SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:742))
Good day everyone, I ran into the following problem. The query:

index=source
| eval time=strftime(_time, "%+")
| stats max(time) values(from) as Sender values(rcpt) as Recipients value(subject) as Subject values(hops_ip) as SenderIP values(ref) as Reference by ref
| where like(senderIP, "10.%")

I am not sure where it went wrong; SenderIP values that are not 10.% are still showing. I did notice that the ref value appears multiple times for different transactions; could that be the cause? Happy new year in advance!
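A corrected sketch (hedged; the field names are taken from the query above, and the intent is assumed to be "keep only rows whose SenderIP starts with 10."). Two likely issues: value() is not a stats function (it should be values()), and field names are case-sensitive, so the where clause must reference SenderIP, not senderIP:

```spl
index=source
| stats max(_time) as time values(from) as Sender values(rcpt) as Recipients values(subject) as Subject values(hops_ip) as SenderIP values(ref) as Reference by ref
| eval time=strftime(time, "%+")
| where like(SenderIP, "10.%")
```

Taking max of the raw _time and formatting it after stats also avoids comparing formatted time strings.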
My goal is to make a report that has running-total (cumulative) data across years. Current-year data is queried from Splunk, while prior-year data is all housed in a lookup (called TY19_Splunk_total_data.csv). My issue is that this report will be on a dashboard that has date range selectors. When a date range is selected, streamstats works correctly for current-year data (since it isolates the data from the dates selected in the range, then adds), but not for prior-year data, because I don't know how to restrict the data in the inputlookup by "date" + 1 year while at the same time having the tokens apply to my base Splunk search. Hopefully that makes sense... here's the query I'm working with:

[base query] year=<current_year>
| timechart span=1d dc(intuit_tid) as current_year_data
| streamstats sum(current_year_data) as current_year_data
| eval time=strftime(_time,"%m-%d")
| join time
    [| inputlookup TY19_Splunk_total_data.csv
    | eval token_time=relative_time(strptime(time,"%m/%d/%Y"),"+1y")
    | where capability="W2" and token_time>=$time.earliest$ and token_time<$time.latest$
    | eval time=strftime(strptime(time,"%m/%d/%Y"),"%m-%d")
    | stats sum(attempts) as prior_year_data by time
    | streamstats sum(prior_year_data) as prior_year_data
    | fields time prior_year_data ]
| fields time current_year_data prior_year_data
| fields - _time
I'm using this endpoint to run a search and return the SID, using Denodo (data virtualization) to make the connection and call: https://xxxxx.splunk.com:8089/services/search/jobsrun

The first time I run the call, it returns with a connection reset error. Then when I re-run it, it returns the SID successfully. I'm able to run it again and get a SID back. It's only after I wait a few hours and try to execute the call that I get the connection reset error again. Do you know what might be causing this, and how I might overcome the error on the first run?
See the example values below. How do I convert the values of the version field so that they have the same number of components?

version=7.3.2
version=8.0.2.1

I would like these to be converted to:

version=7.3.2.0
version=8.0.2.1
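A sketch of one way to pad a three-component version to four components (hedged; it assumes versions have either three or four dot-separated parts, and uses makeresults only for demonstration):

```spl
| makeresults
| eval version="7.3.2"
| eval version=if(mvcount(split(version, "."))==3, version.".0", version)
```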
I need a query to find memory usage of more than 90 percent by hostname. Is it a good idea to do this in Splunk vs. AppDynamics?
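A sketch of what this could look like in Splunk, assuming the Splunk Add-on for Unix and Linux is collecting vmstat data into an index named os (the index name, sourcetype, and memUsedPct field all depend on how your data is collected; adjust to your environment):

```spl
index=os sourcetype=vmstat
| stats latest(memUsedPct) as memUsedPct by host
| where memUsedPct > 90
```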
Hello Splunkers, we have a new correlation search deployed to ingest a 3rd-party (LogicMonitor) system's alerts. The long/short is, we have a script polling their API every minute and writing the events in JSON to an index. Each alert is an individual event and they all have unique IDs. The correlation search is super simple, mapping severity and itsi_eventtype. The following details are configured in the search:

Entity Lookup Field = ObjectName (a field with the name of the object creating the alert; this maps to entities in ITSI and is usually a hostname or IP)
Notable Event Title = %TemplateName% (the field in the data showing the common rule that triggered, and how we group the events)
Severity = %severity% (mapped using eval)

We then have an aggregation policy which matches the itsi_eventtype (created using eval) and then splits the notables by the TemplateName field. Everything is working, except the episode / notable titles are duplicated (e.g. "SQL Performance Alert SQL Performance Alert", where the TemplateName value would be "SQL Performance Alert"). This also happens for the impacted entities, event timeline event type, etc. Not sure what's going on here; it seems to impact only this one correlation search. The only other correlation search comes from the Kubernetes content pack, and it works fine. Thanks for any help!
How do I convert the following string value to a numerical value that represents two digits between the dots?

version = 7.10.23.1

I would like for this to be converted to a numeric value:

version = 07102301

The zero before the 7 does not need to show; I just need to be able to perform <> conditional or comparison functions. I appreciate your help.
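A sketch of one way to do this (hedged; it assumes Splunk 8.0+ for mvmap, that every component is below 100 so two digits suffice, and uses makeresults only for demonstration): split the version on dots, zero-pad each part to two digits, and rejoin:

```spl
| makeresults
| eval version="7.10.23.1"
| eval parts=split(version, ".")
| eval version_num=tonumber(mvjoin(mvmap(parts, printf("%02d", tonumber(parts))), ""))
```

This yields version_num=7102301 (the leading zero drops when converted to a number), which can then be compared with < and >.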
Hello everyone: We are currently running an app which requires an Oracle DB, from which we are extracting data with a Splunk Heavy Forwarder. It is working fine, but as always, we are looking for ways to cut back on licenses. The splunk_app_db_connect houses the JDBC connector which we need. I have a couple of questions about this adapter/connector:

1. Is that an actual licensable Oracle product? (It seems to be, rather than a generic adapter that works with Oracle DB.)
2. Given the first question, are there any options to connect to Oracle RDS from a Splunk HF using non-Oracle-based JDBC software?

Thanks, Bill
Hello all, we are running Splunk v7, Splunk Add-on for Oracle Database v3.7.0, and InfoSec App for Splunk 1.5.3. I was able to get a Universal Forwarder installed on a staging DB server, and we now have the audit logs flowing in; they seem to be getting parsed appropriately, i.e. fields look like they're being extracted correctly, etc. The issue I seem to be running into is that we're not getting the individual users that are supposedly logging in; the database_user field shows up as just a slash, "/", or the user is always SYSDBA. Can anyone who has this app set up and working well comment on how you got it set up to monitor authentications? We do want to track changes as well, but we want to be able to watch login success and failure first.
Hello Splunk Forum team, I have a question regarding the integration: right now I receive the information without problems, but when I try to check it in a search I can't find any log. Here is where we use the scripts to pull data and delete it after 30 days:

5. In $SPLUNK_HOME/etc/apps/TA-cisco_umbrella/local/inputs.conf create the following stanzas. Make sure you change the path and index in the monitor stanza if necessary!

[script://./bin/pull-umbrella-logs.sh]
disabled = 0
interval = 300
index = _internal
sourcetype = cisco:umbrella:input
start_by_shell = false

[script://./bin/delete-old-umbrella-logs.sh]
disabled = 0
interval = 600
index = _internal
sourcetype = cisco:umbrella:cleanup
start_by_shell = false

[monitor:///opt/splunk/etc/apps/TA-cisco_umbrella/data/dnslogs/*/*.csv.gz]
disabled = 0
index = opendns
sourcetype = opendns:dnslogs

6. Verify data is coming in and you are seeing the proper field extractions by searching the data.
Example search: index=awsindexyouchose sourcetype=opendns:dnslogs
Note: You can look for script output by searching: index=_internal sourcetype=cisco:umbrella*

But when I try the search index=_internal sourcetype=cisco:umbrella* I don't retrieve any data.