All Topics



Hello, I'm using the trial version (60 days) of Splunk, version 8.2.2.1, which I installed a few days ago on my Windows machine. I changed the licensing to the Forwarder group (heavy forwarder), but when I try to run a search I get this error:

Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.

My app data is local performance monitoring (Processor: counters C1 Time, User Time, DPC Rate, with instances 0, 1, 2, 3 and Total selected). I cannot go back to the trial Enterprise license, and I have not exceeded the trial period. Is it possible to roll back this configuration and go back to the trial Enterprise license? Thanks
Hi guys, I have a Splunk forwarder instance v8.2.1 on an AIX server. I have a custom app configured on it with which I am monitoring a few logs and forwarding them to an indexer. I am having a weird problem where the forwarder stops sending data every day at 1 PM and resumes the data feed at 1 AM, so I have no data consumed between 1 PM and 1 AM. Any suggestions on what could be the issue?

However, I am also forwarding splunkd.log to the same indexers, and I see that log data all through the day. The issue I am facing is only with one of the custom apps I have on this instance. I am sharing the inputs.conf and props.conf entries:

========== inputs.conf ==========
[monitor:///log/mycustomereport/mycustomereport.log*]
disabled = false
followTail = 0
sourcetype = mycustomereport
blacklist = \.gz
index = 20000_java_app_idx
ignoreOlderThan = 2h

========== props.conf ==========
[mycustomereport]
TIME_PREFIX = \w+\|
TIME_FORMAT = %m/%d/%Y %I:%M:%S %3Q %p
TRUNCATE = 0
MAX_EVENTS = 10000
SHOULD_LINEMERGE = false
KV_MODE = none
LINE_BREAKER = ([\n\r]+)mycustomereport
MAX_TIMESTAMP_LOOKAHEAD = 40

PS: I do see that the log file I am monitoring has data written to it consistently. I did enable debug logs; I don't see anything written that could help me understand the issue. I also don't see any crash file generated.
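A 12-hour on/off pattern like this can, in some setups, come from the interaction of ignoreOlderThan with a misparsed 12-hour timestamp (the TIME_FORMAT above uses %I with %p, so an AM/PM mix-up would shift events by exactly 12 hours and make them look "too old"). As a sketch of one experiment, not a confirmed fix, one could temporarily drop ignoreOlderThan and re-verify the parsed times:

```ini
# inputs.conf -- experimental change while troubleshooting:
# without ignoreOlderThan, events whose timestamps were parsed into
# the wrong half of the day are no longer skipped by the monitor input
[monitor:///log/mycustomereport/mycustomereport.log*]
disabled = false
sourcetype = mycustomereport
blacklist = \.gz
index = 20000_java_app_idx
# ignoreOlderThan = 2h   <- disabled for the test
```

If data then flows all day but with wrong _time values, the timestamp parsing (TIME_PREFIX/TIME_FORMAT) would be the place to look next.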
Pretty much the title. I have created alerts using the IT Essentials Learn app. The alert is running, because I receive alerts in Slack. However, I cannot figure out where the alert is housed, so I am unable to return to the edit screen and modify the alert. I've looked through both IT apps as well as the alerts panel in the Search and Reporting app, and I cannot find the alerts anywhere. Where are they housed?
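One way to locate a saved alert regardless of which app (and which owner) it was created under is the saved-searches REST endpoint, which lists every saved search the current user can see along with its app context. A sketch (the `alert_type!="always"` filter as a way to keep only alerts is an assumption about how the alert was saved):

```spl
| rest /servicesNS/-/-/saved/searches
| search alert_type!="always"
| table title eai:acl.app eai:acl.owner alert_type cron_schedule
```

The `eai:acl.app` column should then show which app actually houses the alert.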
The error says: "Threat list download from https://raw.githubusercontent.com/mitre/cti/master/enterprise-attack/enterprise-attack.json cannot be downloaded." I have contacted the vendor of the app a few times. No go! Please advise.
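Since that URL is public, one first step is to check whether the Splunk instance performing the download can reach it at all; an egress firewall or an unconfigured proxy is a common culprit for this class of error. A sketch, run from the shell of that Splunk host:

```shell
# Print only the HTTP status code for the MITRE CTI JSON.
# A 200 here suggests the problem is inside Splunk (e.g. proxy
# settings for the app); a timeout or refusal points at the network.
curl -sS -o /dev/null -w "%{http_code}\n" \
  https://raw.githubusercontent.com/mitre/cti/master/enterprise-attack/enterprise-attack.json
```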
Hi there,

I have two queries (Query 1 and Query 2). What I am planning to achieve is that when the user clicks on a server_ID in the tabular output of Query 1, it should be passed as input to the WHERE clause in Query 2. Any help would be appreciated.

Query 1:
index=<<index_name>> sourcetype=webserver
| dedup server_ID
| table server_ID

Query 1 output:
server_ID
49552473-567
d5eedf55-dca
5d4bb774-74a
03f03042-1f7

Query 2:
index=<<index_name>> "Exception"
| where server_ID="server_ID from Query1 table"

Thank you
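In Simple XML dashboards this is usually done with a drilldown on the first panel that sets a token from the clicked cell, which the second panel's search then consumes. A sketch (the token name `sel_server_id` and the index name `your_index` are placeholders):

```xml
<panel>
  <table>
    <search>
      <query>index=your_index sourcetype=webserver | dedup server_ID | table server_ID</query>
    </search>
    <drilldown>
      <!-- capture the clicked server_ID into a token -->
      <set token="sel_server_id">$click.value$</set>
    </drilldown>
  </table>
</panel>
<panel>
  <table>
    <search>
      <!-- the token is substituted into the second search on click -->
      <query>index=your_index "Exception" | where server_ID="$sel_server_id$"</query>
    </search>
  </table>
</panel>
```

A panel that should stay hidden until a row is clicked can additionally use `depends="$sel_server_id$"` on the second panel.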
I have two searches with three fields in common but two fields that differ. I'm trying to find returns that don't have a matching sale for that company_name, mid, and card_number. The return and sales fields are both dollar amounts; "total" is the dollar amount of the transaction, return or sale.

index=X sourcetype=Y earliest=-1d@d latest=@d
| where transaction_type="refund"
| stats values(total) as returns by company_name, mid, card_number
| append
    [ search index=X sourcetype=Y earliest=-30d@d latest=@d
      | where transaction_type="sale"
      | stats values(total) as sales by company_name, mid, card_number ]

Currently I have this search that pulls all return transactions from the past day, as well as every sale from the past month. The results look like this:

| company   | MID  | card num    | returns | sales |
|-----------|------|-------------|---------|-------|
| company A | 1234 | 1234***7890 | 50.00   |       |
| company B | 1254 | 1234***1234 | 80.00   |       |
| company C | 1236 | 1234***1230 | 75.00   |       |
| company A | 1234 | 1234***7890 |         | 50.00 |
| company B | 1254 | 1234***1234 |         | 30.00 |
| company C | 1236 | 1234***1230 |         | 75.00 |

You can see company B has refunded the card number 1234***1234 for the amount of $80.00, but there was no sale to that card in that amount. I would like my search to exclusively display the rows (with the return amount only, not any sale numbers) where this happens. So ideally the search would have returned just one row:

| company   | MID  | card num    | returns |
|-----------|------|-------------|---------|
| company B | 1254 | 1234***1234 | 80.00   |
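One pattern for this is a single search over the longer window that tags each row as a return or a sale, aggregates both into one row per company/mid/card, and keeps only groups where the return amount has no equal-valued sale. A sketch, assuming `total` and `transaction_type` behave as described and at most one return amount per group:

```spl
index=X sourcetype=Y earliest=-30d@d latest=@d
| eval returns=if(transaction_type="refund" AND _time>=relative_time(now(), "-1d@d"), total, null())
| eval sales=if(transaction_type="sale", total, null())
| stats values(returns) as returns values(sales) as sales by company_name, mid, card_number
| where isnotnull(returns) AND (isnull(sales) OR isnull(mvfind(sales, "^".returns."$")))
| table company_name, mid, card_number, returns
```

`mvfind` returns null when no value of the multivalue `sales` field matches the return amount, which is what flags the unmatched refunds.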
Has anyone noticed that there is a big difference between what the MC displays for the hot volume under Indexers -> Indexes and Volumes -> Volume Detail and what the OS reports? I have our indexers showing that between 11 and 12 TB are being used for the hot volume, while the MC is reporting that they are not even at 10 TB. That seems like a really big difference to me. I can deal with checking each box, but I like having the MC reporting where I can see all the indexers in one panel. Thanks!
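For a cross-check from the Splunk side, `dbinspect` reports per-bucket size on disk, which can be summed per indexer and compared against both the MC panel and the OS numbers. A sketch (run from a search head that can see all the indexers):

```spl
| dbinspect index=*
| search state=hot
| stats sum(sizeOnDiskMB) as hot_MB by splunkServer
| eval hot_TB=round(hot_MB/1024/1024, 2)
```

Note that the OS figure also includes anything else living on the volume (filesystem overhead, non-hot buckets on a shared mount), which is one common source of this kind of discrepancy.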
Our organization has been using the AppDynamics Java agent with IBM WebSphere for a number of years. This is our first attempt to get the Java agent working with Tomcat. When the Tomcat application server starts up, we get "Connection refused" error messages in the Java agent logs.

Stack trace error message:

org.apache.http.conn.HttpHostConnectException: Connect to server.domain.name:443 [server.domain.name/<IP Address>] failed: Connection refused (Connection refused)
        at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) ~[httpclient-4.5.13.jar:4.5.13]
        at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) ~[httpclient-4.5.13.jar:4.5.13]
        at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) ~[httpclient-4.5.13.jar:4.5.13]
        at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) ~[httpclient-4.5.13.jar:4.5.13]
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) ~[httpclient-4.5.13.jar:4.5.13]
        at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) ~[httpclient-4.5.13.jar:4.5.13]

Configuration information:

      -Dappdynamics.controller.hostName=server.domain.name \
      -Dappdynamics.controller.port=443 \
      -Dappdynamics.agent.applicationName=APP-NAME \
      -Dappdynamics.agent.tierName=Services \
      -Dappdynamics.agent.nodeName=APP1_Node1 \
      -Dappdynamics.agent.accountName=customer1 \
      -Dappdynamics.bciengine.should.implement.new.interfaces=false \
      -Dappdynamics.agent.accountAccessKey=<account number> \

What kind of things could cause a connection refused with the Java agent running in Tomcat? What should I be looking for in the logs? Is there a known difference in connection behaviour between WebSphere and Tomcat? Any help would be appreciated. Thanks!

Dale Chapman
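"Connection refused" is raised at TCP connect time, before TLS or any agent logic runs, so one quick differentiator is to try the same host and port from the Tomcat host itself, outside the JVM. A sketch:

```shell
# From the Tomcat server: does anything answer on the controller port?
# A refusal here as well points at network, proxy, or port configuration,
# not at the agent or at a WebSphere-vs-Tomcat difference.
curl -v https://server.domain.name:443/
```

If the host is reachable from the shell but not from the agent, comparing the working WebSphere JVM's full set of `-Dappdynamics.*` properties against the Tomcat one (for example whether `-Dappdynamics.controller.ssl.enabled=true` is set on the WebSphere side) would be the next step.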
Hi. I am trying to set up alerts to notify when the response time is greater than 1000 milliseconds. The alert has to search every minute, or every 5 minutes. Below is the query I have used:

index=testIndex sourcetype=testSourceType basicQuery
| where executionTime>1000
| stats count by app_name, executionTime

After running the query with "Last 5 minutes" set in the dropdown beside the search icon, I am getting results. Then I saved the query as an alert with the time range set to "Last 5 minutes" and the cron expression set to "*/1 * * * *", to run it every 1 minute over the last 5 minutes. Is this a correct approach? The main point is: I don't want to miss any events with a response time of more than 1000 ms.

Also, what is the difference between setting the time in the dropdown and earliest=-5m latest=now? Can someone please help me?

Thanks in advance.
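To avoid both gaps and duplicate notifications, a common pattern is to make the search window exactly match the schedule and snap it to the minute, so consecutive runs tile the timeline without overlap. A sketch as a savedsearches.conf fragment (the stanza name is a placeholder):

```ini
[High response time alert]
search = index=testIndex sourcetype=testSourceType basicQuery \
    | where executionTime>1000 \
    | stats count by app_name, executionTime
# every 5 minutes, over exactly the previous 5 whole minutes
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m@m
dispatch.latest_time = @m
```

Running every minute over the last 5 minutes, as in the question, would instead evaluate each event up to five times. On the dropdown question: "Last 5 minutes" in the UI is essentially `earliest=-5m latest=now` recorded with the saved search; the `@m` snapping above is what makes scheduled windows line up cleanly.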
Need some assistance from the experts. I have two queries below which I would like to merge on id.

Query 1:
index=aws sourcetype=aws:cloudtrail eventName=RebootInstances
| table _time userName sourceIPAddress requestParameters.instancesSet.items{}.instanceId
| rename requestParameters.instancesSet.items{}.instanceId as id

Query 2:
index=aws sourcetype=aws:description source="us-east-2:ec2_instances"
| table id private_ip_address

I would like the final table fields to be: _time, userName, sourceIPAddress, id, private_ip_address. Any assistance given will be appreciated.
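One way is to search both sourcetypes in a single query and let `stats` stitch the rows together on id. A sketch (assumes id is unique per instance in the description data):

```spl
index=aws (sourcetype=aws:cloudtrail eventName=RebootInstances)
          OR (sourcetype=aws:description source="us-east-2:ec2_instances")
| eval id=coalesce(id, 'requestParameters.instancesSet.items{}.instanceId')
| stats values(_time) as _time values(userName) as userName
        values(sourceIPAddress) as sourceIPAddress
        values(private_ip_address) as private_ip_address by id
| table _time userName sourceIPAddress id private_ip_address
```

The single quotes around the CloudTrail field name are needed because of the `{}` characters in it.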
Hello, we are going to have several OpenShift 4 clusters running CoreOS, and therefore no possibility of installing a standard version of the Splunk Universal Forwarder. Do you think it is possible, by installing a containerized version of the Splunk Universal Forwarder on each OpenShift node, to let it connect to a (non-containerized) Splunk deployment server in order to control them centrally? Thanks a lot, Edoardo
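On the forwarder side, pointing any UF (containerized or not) at a deployment server is just a deploymentclient.conf entry, which a container image can bake in or mount; the container only needs network reachability to the deployment server's management port. A sketch (host name is a placeholder):

```ini
# deploymentclient.conf inside the UF container,
# e.g. mounted into $SPLUNK_HOME/etc/system/local/
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089
```

One caveat worth planning for: if the containers restart with fresh filesystems, each restart may register as a new deployment client unless identity and state are persisted.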
I'm trying to display a total count for each value found in the attributes.eventtype field and group them by the attributes.campaignname field. I'm displaying these stats for 2 specified values of attributes.campaignname:

index=mail sourcetype="phish-campaign-logs" attributes.campaignname="Undelivered Phishing Campaign - FY21Q2 - 062421" OR attributes.campaignname="O365 Re-authentication - FY21Q3"
| spath output=eventtype attributes.eventtype
| dedup id
| stats count(eval(eventtype="Data Submission")) AS Data_Submission, count(eval(eventtype="Email Click")) AS Email_Click, count(eval(eventtype="Email View")) AS Email_View, count(eval(eventtype="No Action")) AS No_Action, count(eval(eventtype="TM Complete")) AS TM_Complete, count(eval(eventtype="TM Sent")) AS TM_Sent BY attributes.campaignname
| addtotals

When running the search, I'm receiving smaller counts for each of the eventtype values for one of the campaigns specified, "Undelivered Phishing Campaign - FY21Q2 - 062421".

If I only specify this campaign in my search, I get back the expected total count for each of the values:

index=mail sourcetype="phish-campaign-logs" attributes.campaignname="Undelivered Phishing Campaign - FY21Q2 - 062421"
| spath output=eventtype attributes.eventtype
| dedup id
| stats count(eval(eventtype="Data Submission")) AS Data_Submission, count(eval(eventtype="Email Click")) AS Email_Click, count(eval(eventtype="Email View")) AS Email_View, count(eval(eventtype="No Action")) AS No_Action, count(eval(eventtype="TM Complete")) AS TM_Complete, count(eval(eventtype="TM Sent")) AS TM_Sent BY attributes.campaignname
| addtotals

Please help me make this search work properly. Thank you in advance.
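If the same id can occur in both campaigns (for example, the same recipient enrolled in each), then `dedup id` on the combined search keeps only the first event per id across both campaigns, which would depress one campaign's counts exactly as described. A sketch that deduplicates within each campaign instead (this interpretation of id is an assumption about the data):

```spl
index=mail sourcetype="phish-campaign-logs"
    (attributes.campaignname="Undelivered Phishing Campaign - FY21Q2 - 062421"
     OR attributes.campaignname="O365 Re-authentication - FY21Q3")
| spath output=eventtype attributes.eventtype
| dedup id attributes.campaignname
| stats count(eval(eventtype="Data Submission")) AS Data_Submission
        count(eval(eventtype="Email Click")) AS Email_Click
        count(eval(eventtype="Email View")) AS Email_View
        count(eval(eventtype="No Action")) AS No_Action
        count(eval(eventtype="TM Complete")) AS TM_Complete
        count(eval(eventtype="TM Sent")) AS TM_Sent
        BY attributes.campaignname
| addtotals
```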
I'm doing a query to return the text part of the log, but when using it on my dashboard it gives this error message: "Value node <query> is not supposed to have children".

My query:
index=... user Passed-Authentication earliest=@d
| rex field=_raw "mdm-tlv=ac-user-agent=(?<message>.*?),"
| table message

My dashboard:
<panel>
  <single>
    <title>Meu titulo</title>
    <search>
      <query>index=... user Passed-Authentication earliest=@d | rex field=_raw "mdm-tlv=ac-user-agent=(?<message/>.*?)," | table message </query>
    </search>
    <option name="height">96</option>
  </single>
</panel>

I believe the error is due to <message>, but I'm new to Splunk.
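The `<` in the rex capture group is indeed the problem: inside Simple XML, `<message>` is parsed as a child element of `<query>`, which is exactly what the error message complains about. Escaping the angle brackets as `&lt;` and `&gt;`, or wrapping the whole query in a CDATA section, is the usual fix. A sketch of the CDATA form:

```xml
<panel>
  <single>
    <title>Meu titulo</title>
    <search>
      <!-- CDATA lets the rex capture group keep its raw < and > -->
      <query><![CDATA[index=... user Passed-Authentication earliest=@d
| rex field=_raw "mdm-tlv=ac-user-agent=(?<message>.*?),"
| table message]]></query>
    </search>
    <option name="height">96</option>
  </single>
</panel>
```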
How do I integrate/connect a macro to a data model or a CIM data model?
Hello all, does someone know the definition of the rest.simpleRequest function? I'm trying to find out how it works when we use it like this:

rest.simpleRequest(url, postargs=postargs, sessionKey=key, raiseAllErrors=True)

Thank y'all
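simpleRequest lives in the splunk.rest module bundled with Splunk's internal Python libraries (under $SPLUNK_HOME/lib/python*/site-packages/splunk/rest/, which is also where its exact signature can be read). In usual usage it issues an HTTP request against splunkd, authenticating with the given session key, and returns a (response, content) pair; passing postargs makes it a POST, and raiseAllErrors=True raises on HTTP errors instead of returning them. A sketch under those assumptions (the endpoint path is a placeholder, and this only runs inside a Splunk Python environment):

```python
# Runs only where Splunk's bundled "splunk" package is importable,
# e.g. from a custom REST handler or a scripted/modular input.
import splunk.rest as rest

def list_apps(session_key):
    # simpleRequest returns a pair: an HTTP response object
    # (with a .status attribute) and the raw response body.
    response, content = rest.simpleRequest(
        '/services/apps/local',      # placeholder endpoint
        sessionKey=session_key,
        getargs={'output_mode': 'json'},
        raiseAllErrors=True,         # raise instead of returning errors
    )
    return response.status, content
```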
I am collecting firewall logs using the OPSEC LEA app. This add-on is set up on a heavy forwarder. The app is set up correctly and logs are coming onto the HF, but I am unable to view them on the search head. The HF has connectivity to the indexers and is sending internal logs and other application data to the indexers. How do I check what is wrong with the Check Point logs?
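A couple of generic checks from the search head can narrow down where the Check Point events stop. A sketch (the host name and the `opsec*` patterns are placeholders; adjust to the app's actual source/sourcetype names):

```spl
index=_internal host=<HF_hostname> source=*splunkd.log* (ERROR OR WARN) opsec*

index=_internal host=<HF_hostname> source=*metrics.log*
    group=per_sourcetype_thruput series=opsec*
```

The first search surfaces errors from the HF itself; the second shows whether the HF is measuring any throughput for the OPSEC sourcetype at all. If throughput is non-zero but nothing is searchable, checking which index the app writes to (and whether that index exists on the indexers and is searchable by your role) would be the next step.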
In the latest Splunk Security Essentials 3.4.0, and in previous releases, the Data Inventory detection in "CIM + Event Size Introspection" starts a query that will never complete due to an unmatched parenthesis. The query is autogenerated, so I'm not sure if this is due to a misconfiguration on my part, or perhaps just an unwanted feature.

(index=main source=WinEventLog:Security) ) OR (index=main source=WinEventLog:Security )
| head 10000
| eval SSELENGTH = len(_raw)
| eventstats range(_time) as SSETIMERANGE
| fields SSELENGTH SSETIMERANGE tag
| fieldsummary
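For reference, the balanced form of the generated base search would presumably just drop the stray closing parenthesis after the first group:

```spl
(index=main source=WinEventLog:Security) OR (index=main source=WinEventLog:Security)
| head 10000
| eval SSELENGTH = len(_raw)
| eventstats range(_time) as SSETIMERANGE
| fields SSELENGTH SSETIMERANGE tag
| fieldsummary
```

That both sides of the OR are identical may be a separate symptom of the same generation bug.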
Please confirm something or correct me. If I understand correctly, it's the event's _time that is the basis for bucket ageing (hot->warm->cold(->frozen)), right? I understand that it's typically designed this way for collecting events whose timestamps grow monotonically. But what would happen if my source (regardless of the reason) generated events with "random" timestamps? One could be from the distant past (several years, maybe?), another from the future, and so on. Would it mean that I'd have a chance of rolling the buckets after just one or two events, because I'd have sufficiently old events, or a sufficiently big timespan in the case of hot buckets?
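The hot-bucket side of this is governed by per-index settings: in particular, maxHotSpanSecs caps the event-timestamp span a single hot bucket may cover, so widely scattered timestamps do tend to multiply and roll hot buckets, and frozenTimePeriodInSecs freezes by the newest event time in a bucket. A sketch of the relevant indexes.conf knobs (values shown are the documented defaults, for illustration):

```ini
# indexes.conf -- settings that interact with out-of-order timestamps
[my_index]
maxHotSpanSecs = 7776000            # max timestamp span per hot bucket (~90 days)
maxHotBuckets = 3                   # how many hot buckets may be open at once
frozenTimePeriodInSecs = 188697600  # event age at which a bucket can freeze (~6 years)
```

With random timestamps and only a few open hot buckets allowed, each far-off event can force an existing hot bucket to roll, so yes, pathological sources can churn buckets quickly.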
Why doesn't the threathunting index receive mapped data from Sysmon (the windows index)? By the way, I edited the macros to suit my environment, but it still didn't work.
Hi, I have two fields in my log file: <servername> and <CLOSESESSION>. I need to know when CLOSESESSION is 0 each day, by servername. Every day I expect CLOSESESSION to appear in my server logs; if one or more servers have no CLOSESESSION, it means something is going wrong.

Here is the SPL:
index="my_index"
| rex field=source "(?<servername>\w+)."
| rex "CLOSESESSION\:\s+(?<CLOSESESSION>\w+)"
| table _time servername CLOSESESSION

Expected output:
Servername    cause
Server10      NOCLOSESESSION
Server15      NOCLOSESESSION

Any idea? Thanks,
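Servers that log nothing at all never appear in search results, so a reference list of expected servers is usually needed; a lookup works for that. A sketch (the lookup file expected_servers.csv with a servername column is an assumption):

```spl
index="my_index"
| rex field=source "(?<servername>\w+)\."
| rex "CLOSESESSION:\s+(?<CLOSESESSION>\w+)"
| stats count(CLOSESESSION) as sessions by servername
| append [ | inputlookup expected_servers.csv | eval sessions=0 ]
| stats sum(sessions) as sessions by servername
| where sessions=0
| eval cause="NOCLOSESESSION"
| table servername cause
```

The appended zero rows guarantee every expected server has an entry, so servers whose only contribution is the lookup row (no CLOSESESSION events that day) end up with sessions=0 and get flagged.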