All Topics


Hi all, I need some advice on what we should use to get logging into Splunk working for "Microsoft Defender for Identity". There are so many different apps on Splunkbase, of varying quality, and I know that Microsoft has renamed the service. Any suggestions? //Jan

Hi, we have a clustered environment with 6 indexers. Each host has 128GB of RAM, but as far as I can see Splunk is using only ~4GB. Is there any way to optimize (speed up) memory usage and let Splunk use, for example, 100GB of RAM? I have tons of different indexes with different dashboards. If yes, how do I do it? Cheers, Konrad

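Splunk does not cache indexed data inside the splunkd process itself; on a healthy indexer the "free" RAM is consumed by the OS page cache as searches read buckets, so a small splunkd footprint is normal. One per-search knob that does exist is max_mem_usage_mb in limits.conf, which caps memory-hungry processors such as stats and sort before they spill to disk. A minimal sketch, assuming you want to raise it on the indexers (the value is illustrative, not a recommendation):

    # $SPLUNK_HOME/etc/system/local/limits.conf on the indexers
    [default]
    # allow each search process more memory before spilling to disk
    max_mem_usage_mb = 2000
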
Hi, how can I find events that did not occur daily? Here is the scenario: I have two fields in my log file, <servername> and <CLOSESESSION>, and I need to know, per day and per servername, when the CLOSESESSION count is 0. Every day I expect CLOSESESSION to appear in each server's logs; if one or more servers have no CLOSESESSION, it means something is going wrong.

I need two searches here: first, extract all server names from the file names that exist in the source path, via metadata, for a faster result; then, in a second query, check which ones have no CLOSESESSION. FYI: I'd rather not use a CSV lookup for the first step; I prefer to do it with multiple searches and a join.

Something like this:

1. The first search returns the list of all log files that exist (per server):

    | metadata type=sources index=my_index
    | table source

2. The second search filters lines containing CLOSESESSION:

    index="my_index" CLOSESESSION
    | rex <extract server names from the "source" field of step 1>
    | rex <extract count of CLOSESESSION>

then join them and show only those without CLOSESESSION.

Here are the logs (the server names do not exist in the log lines themselves; they are extracted from the log file name. I put them into the sample in a different color to make the goal clear):

    23:54:00.957 app server 1 module: CLOSESESSION
    23:54:00.958 app server 3 module: CLOSESESSION
    23:54:00.959 app server 4 module: CLOSESESSION

Expected output, step 1:

    servernames
    server 1
    server 2
    server 3
    server 4

Expected output, step 2:

    Servername    cause
    Server2       NOCLOSESESSION

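A minimal sketch of the two-step approach described above, assuming the server name can be pulled out of the source path with a rex like the hypothetical pattern below (adjust it to the real file naming):

    | metadata type=sources index=my_index
    | rex field=source "(?<servername>server \d+)"
    | stats count by servername
    | fields servername
    | join type=left servername
        [ search index=my_index CLOSESESSION
          | rex field=source "(?<servername>server \d+)"
          | stats count as closesessions by servername ]
    | fillnull value=0 closesessions
    | where closesessions=0
    | eval cause="NOCLOSESESSION"
    | table servername cause

The left join keeps every server from the metadata list, so servers with zero CLOSESESSION events survive to the final where clause.
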
I am getting this error on my deployment server (8.0.9); how do I fix it?

    ./splunk reload deploy-server -class <my serverclass name>
    An error occurred: Could not create Splunk settings directory at '/root/.splunk'.

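The CLI stores session state under $HOME/.splunk, so this error usually means the command ran as a user whose home directory is not writable (or HOME is unset), rather than anything being wrong with the server class. Two common workarounds, sketched with a hypothetical install path and owner:

    # run the CLI as the user that owns the Splunk installation
    sudo -u splunk /opt/splunk/bin/splunk reload deploy-server -class <your_serverclass>

    # or point HOME at a writable directory for this one invocation
    HOME=/opt/splunk ./splunk reload deploy-server -class <your_serverclass>
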
Hello, I read my data with the inputlookup command and try to count the distinct occurrences of the field fields.SID, as below:

    | makeresults
    | eval time=relative_time(now(), "-24h")
    | eval time=ceil(time)
    | table time
    | map [ | inputlookup incidents where alert_time > $time$ ]
    | join incident_id [ | inputlookup incident_results ]
    | fields fields.SID
    | search fields.SID=*
    | mvexpand fields.SID

Unfortunately, whatever tricks I try, I always get several SIDs packed into a single event (see the screenshot below). How would I split it so that each fields.SID ends up in a separate row, so I can count them? Kind regards, Kamil

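mvexpand only splits fields that are already multivalue; if fields.SID comes out of the lookup as a single delimited string, it needs makemv first. Field names containing dots can also behave oddly in eval-family commands, so renaming before splitting is safer. A sketch, assuming a space delimiter (an assumption; use whatever separator actually appears between the SIDs):

    | rename fields.SID as SID
    | makemv delim=" " SID
    | mvexpand SID
    | stats count by SID
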
Hi, we have a status in one log type, where we would like to track whether an account is in the state "bypassed". Example:

    2021-13-10 user1 bypassed
    2021-13-10 user2 enabled
    2021-13-09 user2 bypassed
    2021-13-08 user3 bypassed
    2021-13-08 user3 active
    2021-13-08 user3 bypassed
    2021-13-07 user3 active

How can we find the last 2 statuses per user in a period of time, and then, based on the last bypassed/active status, get only the accounts that still have an active bypass?

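A sketch using latest() to keep only each user's most recent status within the chosen time range, assuming the fields are named user and status (hypothetical; extract them with rex first if they are not already parsed):

    index=your_index sourcetype=your_sourcetype
    | stats latest(status) as last_status, latest(_time) as last_seen by user
    | where last_status="bypassed"
    | convert ctime(last_seen)
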
Hi all, I'm trying to create a search, potentially to be made into a monitoring rule later on. What I am trying to achieve is a way to detect whether a user has logged into their machine from a wildly different IP address. This will use external IP addresses only. As an example, I want to know if a user logged into the estate from an IP which wasn't the same as, or similar to, the previous day's:

    User     Today          Yesterday
    User A   155.123.1.1    155.123.1.1
    User B   155.124.1.2    155.125.20.2
    User C   155.166.2.5    22.18.254.56

In the table above, I have 3 users. Users A and B have logged in from pretty similar IPs, although user B has logged in from a different one today (this often happens in our logs). What I really want to see is user C, who has logged in from a completely different subnet, not similar to their IP from the previous day. This is what I have so far:

    index=foo (earliest=-1d@d latest=now())
    | eval TempClientIP=split(ForwardedClientIpAddress, ",")
    | eval ClientIP=mvindex(TempClientIP, 0)
    | eval ClientIP1=mvindex(TempClientIP, 1)
    | eval ClientIP2=mvindex(TempClientIP, 2)
    | search NOT ClientIP=10.*
    | where LIKE("ClientIP", "ClientIP")
    | eval when=if(_time<=relative_time(now(), "@d"), "Yesterday", "Today")
    | chart values(ClientIP) over user by when
    | where Yesterday!=Today

Some context regarding the search: the ForwardedClientIpAddress field has 3 items inside; ClientIP and ClientIP1 are the same address, and ClientIP2 is the final internal address. ClientIP can be an internal address, which is why there is a NOT to remove it from the searches. Any help would be very much appreciated. Thanks

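One way to flag "wildly different" rather than merely different is to compare a network prefix instead of the full address. A sketch that replaces the final where Yesterday!=Today after the existing chart, assuming Today and Yesterday each hold at least one address:

    | eval net_today=mvindex(split(mvindex(Today,0), "."), 0)
    | eval net_yday=mvindex(split(mvindex(Yesterday,0), "."), 0)
    | where net_today!=net_yday

With the sample data this keeps only user C (155 vs 22) and ignores user B's near-miss; to tighten it to a /16-style check, compare the first two octets with mvjoin(mvindex(split(...), 0, 1), ".") instead.
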
Hi guys, we have a requirement to ingest emails into Splunk. I know a couple of apps are out there, but I could not get them working. I'm also not sure how to set up or request a mail account specifically for Splunk for this purpose, e.g. what settings should be applied. I am a novice as far as mail settings are concerned, so can someone take some time to help me out here, and be as detailed as possible? We are using Splunk 8.0.0. Thanks, Neerav

Hi all, I have created a bar chart on my dashboard with the count of exceptions. Now I want to drill down to a separate dashboard whenever I click on any of the bars (a separate dashboard for each bar). Can we achieve such a drilldown from bar/column charts? I tried "Link to a dashboard", "Link to a custom URL", etc., but it takes me to only one dashboard whichever bar I click in the chart. I also thought of using the "Manage tokens on this dashboard" option, but that doesn't take me to a new dashboard, as it only enables in-page drilldown actions. Please suggest a way to get my desired output. Thank you..!!

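In Simple XML this is normally done with <condition> elements inside <drilldown>, one per bar value, each linking to its own dashboard. A sketch with hypothetical exception names and dashboard paths; the exact match-expression syntax for <condition> varies by Splunk version, so check the Simple XML Reference for your release:

    <drilldown>
      <condition match="$click.value$ == &quot;TimeoutException&quot;">
        <link target="_blank">/app/search/timeout_dashboard</link>
      </condition>
      <condition match="$click.value$ == &quot;NullPointerException&quot;">
        <link target="_blank">/app/search/npe_dashboard</link>
      </condition>
      <!-- bare condition acts as a catch-all for any other bar -->
      <condition>
        <link target="_blank">/app/search/default_exception_dashboard</link>
      </condition>
    </drilldown>
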
I used this eval statement with AND conditions, but I'm only getting the result "Public" even when the condition for the value "Private" is satisfied, i.e. I'm only getting the default result. Any idea what's wrong with this statement?

    | eval perm=case(block_public_acls=true AND block_public_policy=true AND ignore_public_acls=true AND restrict_public_buckets=true, "Private", 1=1, "Public")

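One likely cause, assuming those fields hold the string values true/false: inside eval, an unquoted bareword like true is treated as a field name, and since no field named "true" exists, every comparison evaluates to null, so the 1=1 default always wins. A sketch with quoted comparisons:

    | eval perm=case(block_public_acls="true" AND block_public_policy="true" AND ignore_public_acls="true" AND restrict_public_buckets="true", "Private", 1=1, "Public")
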
Hello, I'm using the trial version (60 days) of Splunk, version 8.2.2.1, which I installed a few days ago on my Windows machine. I changed the license group to Forwarder (for a heavy forwarder), but when I try to search I get this error:

    Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.

My app data is local performance monitoring (Processor: selected counters C1 Time, User Time, DPC Rate, and instances 0, 1, 2, 3, total). I cannot get back to the trial Enterprise license, and I have not exceeded the trial period. Is it possible to roll back this configuration and return to the trial Enterprise license? Thanks

Hi guys, I have a Splunk forwarder instance, v8.2.1, on an AIX server, with a custom app configured on which I am monitoring a few logs and forwarding them to an indexer. I am having a weird problem where the forwarder stops sending data every day at 1 PM and resumes at 1 AM, so I have no data between 1 PM and 1 AM. Any suggestions on what the issue could be? I am also forwarding splunkd.log to the same indexers, and I see that log data all through the day; the issue is only with this one custom app on this instance. I am sharing the inputs.conf and props.conf entries:

    ========== inputs.conf =========
    [monitor:///log/mycustomereport/mycustomereport.log*]
    disabled = false
    followTail = 0
    sourcetype = mycustomereport
    blacklist = \.gz
    index = 20000_java_app_idx
    ignoreOlderThan = 2h

    ========== props.conf =========
    [mycustomereport]
    TIME_PREFIX = \w+\|
    TIME_FORMAT = %m/%d/%Y %I:%M:%S %3Q %p
    TRUNCATE = 0
    MAX_EVENTS = 10000
    SHOULD_LINEMERGE = false
    KV_MODE = none
    LINE_BREAKER = ([\n\r]+)mycustomereport
    MAX_TIMESTAMP_LOOKAHEAD = 40

PS: I can see that the log file I am monitoring has data written to it consistently. I enabled debug logs but don't see anything that would help me understand the issue, and I don't see any crash file generated.

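The 12-hour shape of the gap (1 PM to 1 AM) lines up suspiciously with the 12-hour TIME_FORMAT: if the log actually writes 24-hour times, %I plus %p will mis-parse afternoon events, and events stamped 12 hours in the past get indexed but stay invisible to searches over recent windows. Searching the sourcetype over All time during an outage would confirm whether events are arriving with shifted timestamps. A sketch of the props change to test, assuming the log uses a 24-hour clock (an assumption; compare against a raw log line first):

    [mycustomereport]
    TIME_PREFIX = \w+\|
    # %H (24-hour) instead of %I, and no %p, if the log has no AM/PM marker
    TIME_FORMAT = %m/%d/%Y %H:%M:%S %3Q
    MAX_TIMESTAMP_LOOKAHEAD = 40
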
Pretty much the title. I have created alerts using the IT Essentials Learn app. The alerts are running, because I receive them in Slack; however, I cannot figure out where the alerts are housed, so I am unable to return to the edit screen and modify them. I've looked through both IT apps as well as the Search & Reporting app's alerts panel, and I cannot find the alerts anywhere. Where are they housed?

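Alerts are saved searches scoped to whichever app was active when they were created, so even when the UI hides them, the REST endpoint lists them all. A sketch that shows each alert's owning app (the *slack* filter assumes the Slack action's name contains "slack"; adjust as needed):

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | search actions="*slack*"
    | table title eai:acl.app eai:acl.owner actions
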
The error says "Threat list download from  https://raw.githubusercontent.com/mitre/cti/master/enterprise-attack/enterprise-attack.json Can not be downloaded. I have contacted the vendor of the App.... See more...
The error says "Threat list download from  https://raw.githubusercontent.com/mitre/cti/master/enterprise-attack/enterprise-attack.json Can not be downloaded. I have contacted the vendor of the App. few times, No go! Please advise.  
Hi there, I have two queries (Query 1 and Query 2). What I am trying to achieve is that when a user clicks on a server_ID in the tabular output of Query 1, it should be passed as input to the WHERE clause of Query 2. Any help would be appreciated.

Query 1:

    index=<<index_name>> sourcetype=webserver
    | dedup server_ID
    | table server_ID

Query 1 output:

    server_ID
    49552473-567
    d5eedf55-dca
    5d4bb774-74a
    03f03042-1f7

Query 2:

    index=<<index_name>> "Exception"
    | where server_ID="<server_ID from Query 1 table>"

Thank you

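In a dashboard this is a token drilldown: the first panel sets a token from the clicked value, and the second panel's search references that token. A minimal Simple XML sketch (the token name and index placeholder are hypothetical):

    <table>
      <search>
        <query>index=your_index sourcetype=webserver | dedup server_ID | table server_ID</query>
      </search>
      <drilldown>
        <set token="sel_server">$click.value$</set>
      </drilldown>
    </table>
    <table>
      <search>
        <query>index=your_index "Exception" | where server_ID="$sel_server$"</query>
      </search>
    </table>

Wrap the second panel in <panel depends="$sel_server$"> if it should stay hidden until a row is clicked.
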
I have two searches with three fields in common but two fields that differ. I'm trying to find returns that don't have a matching sale for that company_name, mid, and card_number. The returns and sales fields are both dollar amounts; "total" is the dollar amount of the transaction, return or sale.

    index=X sourcetype=Y earliest=-1d@d latest=@d
    | where transaction_type="refund"
    | stats values(total) as returns by company_name, mid, card_number
    | append
        [ search index=X sourcetype=Y earliest=-30d@d latest=@d
          | where transaction_type="sale"
          | stats values(total) as sales by company_name, mid, card_number ]

Currently this search pulls all return transactions from the past day, as well as every sale from the past month; the results look like this:

    company     MID    card num        returns   sales
    company A   1234   1234***7890     50.00
    company B   1254   1234***1234     80.00
    company C   1236   1234***1230     75.00
    company A   1234   1234***7890               50.00
    company B   1254   1234***1234               30.00
    company C   1236   1234***1230               75.00

You can see company B has refunded card number 1234***1234 for the amount of $80.00, but there was no sale to that card in that amount. I would like my search to display only the rows (with the return amount, not any sale numbers) where this happens, so ideally the search would have returned just one row:

    company     MID    card num        returns
    company B   1254   1234***1234     80.00

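A sketch that avoids append by folding both transaction types into one pass, then keeping only refunds whose amount never appears among that card's sale amounts. mvfind treats its second argument as a regex, so the match is approximate for amounts containing dots; field names are taken from the search above:

    index=X sourcetype=Y earliest=-30d@d latest=@d (transaction_type="refund" OR transaction_type="sale")
    | eval refund_amt=if(transaction_type="refund" AND _time>=relative_time(now(), "-1d@d"), total, null())
    | eval sale_amt=if(transaction_type="sale", total, null())
    | stats values(refund_amt) as returns, values(sale_amt) as sales by company_name, mid, card_number
    | where isnotnull(returns)
    | mvexpand returns
    | where isnull(sales) OR isnull(mvfind(sales, "^".returns."$"))
    | table company_name, mid, card_number, returns
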
Has anyone noticed that there is a big difference between what the MC displays for the hot volume under Indexers → Indexes and Volumes → Volume Detail and what the OS reports? My indexers show between 11-12 TB used on the hot volume, while the MC reports they are not even at 10 TB. That seems like a really big difference to me. I can deal with checking each box, but I like having the MC report where I can see all the indexers in one panel. Thanks!

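One way to cross-check what Splunk itself believes the hot buckets occupy, per indexer, is dbinspect; if this agrees with the MC but not with df, the gap is usually non-bucket data on the volume (other directories, dispatch artifacts, filesystem overhead, or deleted-but-still-open files). A sketch:

    | dbinspect index=*
    | search state=hot
    | stats sum(sizeOnDiskMB) as hot_mb by splunk_server
    | eval hot_gb=round(hot_mb/1024, 1)
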
Our organization has been using the AppDynamics Java agent with IBM WebSphere for a number of years; this is our first attempt to get the Java agent working with Tomcat. When the Tomcat application server starts up, we get Connection Refused error messages in the Java agent logs.

Stack trace error message:

    org.apache.http.conn.HttpHostConnectException: Connect to server.domain.name:443 [server.domain.name/<IP Address>] failed: Connection refused (Connection refused)
            at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) ~[httpclient-4.5.13.jar:4.5.13]
            at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) ~[httpclient-4.5.13.jar:4.5.13]
            at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) ~[httpclient-4.5.13.jar:4.5.13]
            at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) ~[httpclient-4.5.13.jar:4.5.13]
            at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) ~[httpclient-4.5.13.jar:4.5.13]
            at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) ~[httpclient-4.5.13.jar:4.5.13]

Configuration information:

          -Dappdynamics.controller.hostName=server.domain.name \
          -Dappdynamics.controller.port=443 \
          -Dappdynamics.agent.applicationName=APP-NAME \
          -Dappdynamics.agent.tierName=Services \
          -Dappdynamics.agent.nodeName=APP1_Node1 \
          -Dappdynamics.agent.accountName=customer1 \
          -Dappdynamics.bciengine.should.implement.new.interfaces=false \
          -Dappdynamics.agent.accountAccessKey=<account number> \

What kind of things could cause a connection refused with the Java agent running in Tomcat? What should I be looking for in the logs? Is there a known difference in connection behaviour between WebSphere and Tomcat? Any help would be appreciated. Thanks! Dale Chapman

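Connection refused means the TCP connection never reached an HTTP exchange, so agent settings beyond host and port are unlikely to be the direct cause; the usual suspects are a proxy required on the Tomcat host, a firewall rule scoped only to the WebSphere hosts, or SSL not enabled for port 443. A quick reachability test from the Tomcat host, using the hostname from the post:

    # does anything answer on 443 from this host at all?
    curl -vk https://server.domain.name:443/

If a proxy is in play, the agent supports -Dappdynamics.http.proxyHost and -Dappdynamics.http.proxyPort; and with port 443 it is worth confirming -Dappdynamics.controller.ssl.enabled=true is set, since it does not appear in the configuration shown.
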
Hi, I am trying to set up an alert to notify when the response time is greater than 1000 milliseconds. The alert has to search every minute, or every 5 minutes. Below is the query I have used:

    index=testIndex sourcetype=testSourceType basicQuery
    | where executionTime>1000
    | stats count by app_name, executionTime

After running the query with "Last 5 minutes" selected in the dropdown beside the search icon, I get results. I then saved the query as an alert with the time range set to "Last 5 minutes" and the cron expression set to "*/1 * * * *" to run it every minute over the last 5 minutes. Is this a correct approach? The main point is: I don't want to miss any events with a response time of more than 1000 ms. Also, what is the difference between setting the time in the dropdown and using earliest=-5m latest=now? Can someone please help me? Thanks in advance.

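To guarantee neither gaps nor double-counting between runs, snap the window to whole minutes and make the window length equal to the cron interval. A sketch for the one-minute cron (*/1 * * * *), with the range written inline so it is explicit; inline earliest/latest in the search string override whatever the time dropdown says, which is the practical difference between the two:

    index=testIndex sourcetype=testSourceType basicQuery earliest=-1m@m latest=@m
    | where executionTime>1000
    | stats count by app_name, executionTime

Snapping latest to @m means each run searches only completed minutes, so consecutive runs tile the timeline without overlap; if indexing lag is significant, shift the whole window back (e.g. earliest=-2m@m latest=-1m@m).
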
I need some assistance from the experts. I have the two queries below, which I would like to merge on id.

Query 1:

    index=aws sourcetype=aws:cloudtrail eventName=RebootInstances
    | table _time userName sourceIPAddress requestParameters.instancesSet.items{}.instanceId
    | rename requestParameters.instancesSet.items{}.instanceId as id

Query 2:

    index=aws sourcetype=aws:description source="us-east-2:ec2_instances"
    | table id private_ip_address

I would like the final table fields to be: _time, userName, sourceIPAddress, id, private_ip_address. Any assistance given will be appreciated.

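A sketch using join on id, built directly from the two searches above:

    index=aws sourcetype=aws:cloudtrail eventName=RebootInstances
    | rename requestParameters.instancesSet.items{}.instanceId as id
    | table _time userName sourceIPAddress id
    | join type=left id
        [ search index=aws sourcetype=aws:description source="us-east-2:ec2_instances"
          | table id private_ip_address ]
    | table _time userName sourceIPAddress id private_ip_address

The left join keeps reboot events even when no matching instance description exists; drop type=left to keep only matched rows.
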