All Topics


Hello, I have recently started learning Splunk and am stuck trying to make a search display events from both my Linux and Windows machines at once. For example, for Windows I have created this query that counts and displays, per minute, the times EventID 4625 recorded a failed password from ssh:

sourcetype="wineventlog:security" OR source="/var/log/auth.log" host="CLIENT1-DESKTOP" OR host="client3-kali" Account_Name=client1 failed password* | bucket span=1m _time | stats count by _time, host, source, Caller_Process_Name, Account_Name, EventCode, Failure_Reason | table _time, host, source, EventCode, count, Caller_Process_Name, Account_Name, Failure_Reason

And I have this query for Linux that does the same:

index=* sourcetype="wineventlog:security" OR source="/var/log/auth.log" host="CLIENT1-DESKTOP" OR host="client3-kali" OR user=* failed password* | bucket span=1m _time | stats count by _time, host, process, source | table _time, host, source, process, count

The issue is that whenever I try to make it display both Linux and Windows events at once by providing the fields together, such as process (Linux), EventCode (Windows), Account_Name (Windows), and user (Linux), with this query:

sourcetype="wineventlog:security" OR source="/var/log/auth.log" host="CLIENT1-DESKTOP" OR host="client3-kali" failed password* | bucket span=1m _time | stats count by _time, host, source, EventCode | table _time, host, source, EventCode

then it only shows me the Windows logs, and this is just because EventCode was added. If I remove EventCode and run it as:

sourcetype="wineventlog:security" OR source="/var/log/auth.log" host="CLIENT1-DESKTOP" OR host="client3-kali" failed password* | bucket span=1m _time | stats count by _time, host, source | table _time, host, source

then both appear on screen, but without the filters I want. I am confused, can anyone help me please? Thanks!
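A minimal sketch of one way around this, assuming the field names above: stats silently drops any event that is missing one of its by fields, so a Windows-only field such as EventCode removes every Linux event from the result. Merging the per-OS identity fields and filling the rest first keeps both sets:

sourcetype="wineventlog:security" OR source="/var/log/auth.log" host="CLIENT1-DESKTOP" OR host="client3-kali" failed password*
| eval account=coalesce(Account_Name, user)
| fillnull value="n/a" EventCode process Failure_Reason account
| bucket span=1m _time
| stats count by _time, host, source, EventCode, process, account, Failure_Reason
| table _time, host, source, EventCode, process, account, Failure_Reason, count

The fillnull value is a placeholder only; any literal works as long as every event ends up with a value in each by field.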
I am getting the below error, but my outputs.conf is already configured. Here are the outputs.conf and server.conf. I checked metrics.log and nothing is blocked.
Hi Gentlemen, I'm working for an API security company; we provide vulnerability detection as well as real-time detection and prevention. We are now working on integrating our platform with Splunk, and some questions popped up as part of the process:
Which versions and products of Splunk should we integrate with? Is it a generic integration for all of them where we only need to switch platform, or is it different for each one?
How should we send the data to Splunk? We thought about syslog; is there any other recommended way?
What kind of data is it most recommended that we send to Splunk?
Can we create rules and actions through the integration with Splunk (e.g. a WAF rule)?
What is the best practice to build the integration and test it? Should we stand up a Splunk environment, and if so which one, and does Splunk offer any support for this process?
Additionally, I would like to understand whether we need to send data differently if the type of data is different. For example, assuming I'm sending both vulnerabilities and anomalies, should I send both of them to the same place, or is there a different location for each one?
Thanks in advance, Jonathan.
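Besides syslog, a common option is to push events over HTTPS to Splunk's HTTP Event Collector (HEC), which is available on both Splunk Enterprise and Splunk Cloud. A minimal sketch, where the hostname, token, index, and sourcetype below are placeholders, not real values:

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": {"type": "vulnerability", "severity": "high"}, "sourcetype": "apisec:vuln", "index": "apisec"}'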
Hi All, I have a join query that works perfectly fine for my use case, but I was trying to see if I can write it using stats or another more performant command. I'm trying to pull a report of transactions with their status. These are from a single source file. A log entry is created when the event starts, and another log entry is created when the event completes. It is also possible for the start event to repeat itself if it did not complete the first time. Here's my query with a join:

index=en source="/merchant.log" host="merc.com" event="start" | dedup src_key | join type=outer joinkey [search source="/merchant.log" host="merc.com" event="complete" success="true" | table joinkey, resultcode] | table src_key, area, resultcode, _time, txn_amt

The closest I got using stats was with https://community.splunk.com/t5/Splunk-Search/Alternative-method-to-using-Join-command/m-p/532978#M150560.
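A sketch of the usual stats-based rewrite, assuming src_key in the start events and joinkey in the complete events refer to the same transaction key (if they are literally the same field, the coalesce is unnecessary):

index=en source="/merchant.log" host="merc.com" (event="start" OR (event="complete" success="true"))
| eval key=coalesce(src_key, joinkey)
| stats latest(area) as area, latest(txn_amt) as txn_amt, latest(_time) as _time, values(resultcode) as resultcode by key
| table key, area, resultcode, _time, txn_amt

latest() keeps the most recent start values when the start event repeats, and rows with no resultcode correspond to transactions that never completed.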
Hi there, I have 2 messages that log when a job is run, which share a job_id field: event_name=process.start and event_name=process.end. I'm trying to create an alert that fires if there is an event_name=process.start but no event_name=process.end after 3 hours. I've seen lots of examples of using transactions between 2 events to get the duration, but not any for when an event is missing. Many thanks; apologies if this is a noob question.
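One stats-based sketch, assuming the field names above and a search window wider than the 3-hour threshold (for example earliest=-6h): count the starts and ends per job_id and alert on jobs whose start is more than 3 hours old with no end.

event_name="process.start" OR event_name="process.end"
| stats count(eval(event_name="process.start")) as starts, count(eval(event_name="process.end")) as ends, min(_time) as start_time by job_id
| where ends=0 AND start_time < relative_time(now(), "-3h")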
Hi all, I need to write a query that checks whether the condition (Daily AH <= Daily Po <= Daily Risk <= Daily File <= Daily Instrum) is met for each row. If the condition is not met, get rid of the row value that did not meet the condition and all the values after it.
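A rough sketch of one way to do the cascading check with eval, assuming the five values are fields on each row with exactly these names (field names containing spaces are double-quoted on the left of eval and single-quoted when referenced):

| eval "Daily Po"=if('Daily Po'>='Daily AH', 'Daily Po', null())
| eval "Daily Risk"=if(isnotnull('Daily Po') AND 'Daily Risk'>='Daily Po', 'Daily Risk', null())
| eval "Daily File"=if(isnotnull('Daily Risk') AND 'Daily File'>='Daily Risk', 'Daily File', null())
| eval "Daily Instrum"=if(isnotnull('Daily File') AND 'Daily Instrum'>='Daily File', 'Daily Instrum', null())

Each step nulls a value if its own comparison fails or if an earlier value was already removed, so everything after the first failure is dropped as well.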
Hi All, We are facing an issue where the alert sent to the customer for any health rule violation is in the UTC timezone. Please let us know if there is any way to change the timezone from UTC to the local timezone, so that the customer receives the alert/health rule violation time in their local timezone.
Hi all, I have a few queries to be modified using tstats. I am new to Splunk; please let me know whether these queries can be converted to tstats.

Query1: index=abc "NEW" "/resource/page" appname=ui OR appname=uz | stats avg(response_time)

Query2: index=abc sourcetype=abc host=ghjy "transaction" NOT "user" | stats avg(ResponseTime)

Query3: index=abc iru=/resiurce/page appname=ui NOT 1234 NOT 1991 NOT 2022 "Bank status" | stats count
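As a general note, tstats can only aggregate over indexed fields or an accelerated data model, so raw-term filters like "NEW" and search-time fields such as response_time cannot be converted as-is. A sketch of what Query1 could look like against a hypothetical accelerated data model named web that contains response_time and appname (the model and field names are assumptions):

| tstats avg(web.response_time) as avg_response_time from datamodel=web where index=abc (web.appname="ui" OR web.appname="uz") by _time span=1h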
Hello, I am trying to monitor an application log and have Splunk generate an alert only when the service_status="disconnected" and service_status="connected" entries are logged and the time between the two is greater than 10 seconds, OR when service_status="disconnected" is the only entry being logged. I've been experimenting with the transaction command but I am not getting the desired results. Thanks in advance for any help with this. Example log entries:

--- service is okay, do not generate an alert ---
9/2/2022 00:10:36.683 service_status="disconnected"
9/2/2022 00:10:38.236 service_status="connected"

--- service is down, generate an alert ---
9/2/2022 00:10:40.683 service_status="disconnected"
9/2/2022 00:10:51.736 service_status="connected"

--- service is down, the service_status="connected" event is missing, generate an alert ---
9/2/2022 01:15:15.603 service_status="disconnected"
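A transaction-based sketch, assuming the service_status field shown above (index and sourcetype are placeholders): keepevicted=true keeps the disconnects that never found a matching connect, and closed_txn=0 identifies them.

index=your_index sourcetype=your_sourcetype (service_status="disconnected" OR service_status="connected")
| transaction startswith=eval(service_status=="disconnected") endswith=eval(service_status=="connected") keepevicted=true
| where duration > 10 OR closed_txn == 0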
I inherited a Splunk mesh of search heads, a deployment server, an indexer cluster, etc. I am trying to figure out all this Splunk stuff, but ran into an issue where I am not sure whether it ignores best practice, is poor judgement, or is as intended. We have 8 main indexers that do what indexers do, all clustered as peer nodes. The deployment server is the master node and the search head for the cluster (which I don't understand, since we also have 5 separate main search heads). We also have a disaster recovery (DR) site with an indexer that is a peer node in the aforementioned cluster. The cluster has a replication factor of 3 (number of copies of raw data) and a search factor of 2 (number of searchable copies). I'm newer to cyber, so forgive me if I don't understand right away or am missing something glaringly obvious. But does it make sense to have the DR indexer be part of the cluster? If it does make sense, how do I ensure that the other 8 indexers send a copy of all their data to the DR indexer? I thought the master node just juggles the incoming streams from the forwarders and balances the data across all the indexers. Also:
- Should the deployment server double as the master node and a search head for the indexer cluster?
- What is the difference between the 5 separate main search heads and the search head in the indexer cluster?
- (Last one, I swear) Would it make sense to have a search head cluster, or keep the search heads separate, as the 5 are accessed and used by different groups (networking, development, QA/testing, cybersecurity, and UBA, which we don't even have active right now because I cannot get the UBA servers to work or the web UI to launch)?
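On the DR question specifically, the mechanism Splunk provides for guaranteeing copies at a second site is multisite indexer clustering, where replication and search factors are defined per site. This is only a sketch of the relevant manager-node server.conf settings under the assumption of two sites, not a recommendation to change a running cluster:

[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,site2:1,total:3
site_search_factor = origin:1,site2:1,total:2

With a single-site cluster, which is what the 8+1 peers described above appear to be, the manager places bucket copies on whichever peers it chooses, so there is no guarantee the DR indexer holds a copy of everything.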
Hi, I am trying to get SQL performance monitoring logs into our environment for one of our ITSI use cases. The events successfully come into our event index; however, I would like to convert these performance monitoring SQL logs into metrics, as that will work much better with ITSI. I am struggling to convert the logs into metrics and am using the following documentation to help me do so: https://docs.splunk.com/Documentation/Splunk/9.0.1/Data/Extractfieldsfromfileswithstructureddata

Here are my props and transforms conf files for one of the SQL perfmon inputs.

props.conf:

[Perfmon:sqlserverhost:physicaldisk]
TRANSFORMS-field_value = field_extraction
TRANSFORMS-sqlphysicaldiskmetrics = eval_sqlphysicaldiskcounter
METRIC-SCHEMA-TRANSFORMS = metric-schema:extract_sqlphysicaldisk

transforms.conf:

[eval_sqlphysicaldiskcounter]
INGEST_EVAL = metric_name=counter

[metric-schema:extract_sqlphysicaldisk]
METRIC-SCHEMA-MEASURES = _ALLNUMS_

My SQL index, where I would like these logs to go, does not have the "datatype=metrics" setting, as I thought the transforms should convert the events into metrics regardless. I also changed the setting so that datatype = metrics, but that removed all the data entirely and nothing was populated into the SQL index. I can still see the event data populating the SQL index, but it cannot be searched using the metrics commands (mstats, mcatalog, etc.). Note: there are 8 counter field values which I would like to convert individually into metrics, hence why I set metric_name = counter; I did not break it down into separate settings in transforms.conf because there are spaces in the field values. Any idea why this is failing and how I can fix it? Any help would be greatly appreciated! Any questions, please ask! Thanks
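For log-to-metrics, the destination index has to be a metrics index; the indexes.conf setting is datatype = metric (singular), and it is usually a separate index from the event index so the original events can stay where they are. A sketch only; the index name below is an assumption:

# indexes.conf (sketch)
[sql_perfmon_metrics]
homePath   = $SPLUNK_DB/sql_perfmon_metrics/db
coldPath   = $SPLUNK_DB/sql_perfmon_metrics/colddb
thawedPath = $SPLUNK_DB/sql_perfmon_metrics/thaweddb
datatype   = metric

Once the converted data lands there, something like | mstats avg(_value) WHERE index=sql_perfmon_metrics metric_name=* span=1m BY metric_name should confirm the metrics are searchable.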
Recently learned of this new Splunk app for Red Hat Insights. Just curious if anyone is currently utilizing it with their Splunk Cloud. We looked into installing the app, but it says the app does not support search head cluster deployments. I realize Splunk defers all support to Red Hat for this application, and I am exploring whether anyone else has come across this issue.
For example, I am getting Splunk logs with 4 fields:

time1: service = "service1" | operation = "sampleOperation1" | responseTime = "10" | requestId = "sampleRequestId1"
time2: service = "service2" | operation = "sampleOperation2" | responseTime = "60" | requestId = "sampleRequestId2"
time3: service = "service2" | operation = "sampleOperation2" | responseTime = "60" | requestId = "uniqueRequestId3"
time4: service = "service4" | operation = "sampleOperation4" | responseTime = "30" | requestId = "sampleRequestId4"

My objective is to find, from all the logs, whether the count is greater than 20 for a combination of (service, operation) with responseTime>40. Expected output:

service1  operation1  [sampleRequestId2, uniqueRequestId3]

The query I have for now is:

search here...... | stats count(eval(responseTime>60)) as reponseCount by service, operation | eval title=case(match(service,"service2") AND reponseCount>20, "alert1") | search title=* | table title, service

But here I cannot refer to requestId, which is being dropped. Please suggest if you have any solution.
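One possible sketch, assuming the field names above: carry the request IDs through the aggregation with values() so they are still available after stats.

search here......
| stats count(eval(responseTime>40)) as responseCount, values(eval(if(responseTime>40, requestId, null()))) as requestIds by service, operation
| where responseCount>20
| table service, operation, requestIds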
Hi All, I was trying to create a global field for newly indexed data, so I am trying the automatic lookup settings. For example, the datacenter name is not mentioned in the indexed data, so I wanted to populate it using an automatic lookup. I am able to do that, but only for 1 sourcetype, and I have 100+ sourcetypes. Is there any way to define the "Apply to" sourcetype/host for multiple ones? Please let me know.
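If the lookup key is something every event shares (host, for example), one way to avoid 100+ per-sourcetype definitions is to attach the automatic lookup to a host:: or source:: stanza in props.conf, which can be wildcarded. A sketch, where the lookup name and field names are assumptions:

# props.conf (sketch)
[host::*]
LOOKUP-datacenter = datacenter_lookup host OUTPUT datacenter

This assumes a lookup definition called datacenter_lookup already exists with a host column and a datacenter column; the equivalent in the UI is setting "Apply to" to host and "named" to *.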
Hello everyone! I have a time in this format: 2022-09-02T18:44:15. This time is in GMT+3, and I need to convert it to UTC. Can you help me?
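A sketch with eval, assuming the value is in a field called event_time: parse it, subtract the 3-hour offset, and format it back. Because strptime and strftime use the same timezone for parsing and formatting, the net effect is a plain 3-hour shift of the wall-clock string.

| eval utc_time=strftime(strptime(event_time, "%Y-%m-%dT%H:%M:%S") - 3*3600, "%Y-%m-%dT%H:%M:%S")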
Splunk Connect for Zoom stopped working after Zoom enforced the use of SSL certificates on 2022/07/20. After support tickets with Zoom and Splunk, here is some experience I would like to share. Using SSL certificates signed by a private or internal CA did not work; it seems that I had to use a certificate signed by a commercial CA like Entrust. If you want to chain your SSL certificate with the Entrust root and intermediate certificates, please ensure that the certificates are in the order shown below after running the command:

openssl crl2pkcs7 -nocrl -certfile yoursslcertificate.entrust.pem | openssl pkcs7 -print_certs -noout

Or you could just include the commercially issued SSL certificate without the root and intermediate certificates.

subject=/C=US/ST=STATE/L=CITY/O=ORG, Inc./CN=mycompany.com
issuer=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2012 Entrust, Inc. - for authorized use only/CN=Entrust Certification Authority - L1K

subject=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2012 Entrust, Inc. - for authorized use only/CN=Entrust Certification Authority - L1K
issuer=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2009 Entrust, Inc. - for authorized use only/CN=Entrust Root Certification Authority - G2

subject=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2009 Entrust, Inc. - for authorized use only/CN=Entrust Root Certification Authority - G2
issuer=/C=US/O=Entrust, Inc./OU=See www.entrust.net/legal-terms/OU=(c) 2009 Entrust, Inc. - for authorized use only/CN=Entrust Root Certification Authority - G2

If all works after restarting Splunk, running netstat -nap | grep 4443 will show connections like the following from Zoom IP addresses, and you should see logs under sourcetype=zoom:webhook:

tcp 0 0 0.0.0.0:4443 0.0.0.0:* LISTEN 25849/python3.7
tcp 0 0 10.#.#.#:4443 3.235.82.171:41101 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.171:58497 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.171:54514 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:48513 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.171:53006 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:55259 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:46028 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:52837 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.172:7527 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.82.171:12934 TIME_WAIT -
tcp 0 0 10.#.#.#:4443 3.235.83.101:32088 TIME_WAIT -
Hello all, how is it possible to change the default dump folder on Windows?
I had so much trouble with this but figured I would share what I did to make it work for me. You may have other ways of doing it but I found very little guidance online to help someone going through the process. If you have done other things that worked for you, feel free to reply and share.
Hello all, a Splunk newbie here. For the company that I work for, we want to monitor some licenses that are being used. The logs show, per user, the type of license event they have. The type can for the most part be IN (not using) or OUT (using the license), and sometimes DENIED, but that is not of interest currently. Because users sometimes forget to log off, we want to take this into account by looking at the data over the past 2 weeks. I take the most recent type for each user and focus on whether that type is OUT, because this means the user is using a license. This gives a count of OUT over the past 2 weeks, which is pretty accurate compared with what the license manager shows. This count of OUT over the past 2 weeks needs to be shown every 5 minutes on a (time)chart. So, is it possible to have a (time)chart that runs a count over the past 2 weeks every 5 minutes? For the query I have:

base search | dedup 1 user sortby -_time | table user type _time | search type=out

This gives me only the users whose latest type is OUT, which means these are the ones that are using a license. Again, I would like to count the number of OUTs over the past 2 weeks and have that number calculated every 5 minutes and shown on a (time)chart. I have tried loads of stuff (from other posts) but did not manage to get it to work. There already is a workaround where we use an ETL tool with the Splunk API as middleware, but I thought there should be a more efficient way to do it. If any more info is needed I (hopefully) can provide that. Thanks in advance, M.
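One pattern that fits this (a sketch only; the index and source names are assumptions): schedule the search to run every 5 minutes over the last 14 days, reduce it to a single number, and write that number to a summary index with collect. A timechart over the summary index then shows one point per 5-minute run.

Scheduled search (cron every 5 minutes, earliest=-14d):

base search | dedup 1 user sortby -_time | search type=out | stats count as licenses_in_use | collect index=summary source=license_usage_5m

Chart search:

index=summary source=license_usage_5m | timechart span=5m max(licenses_in_use) as licenses_in_use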
Can we configure inputs.conf to define a port with multiple sourcetypes? For example:

[tcp://6134]
index = top
sourcetype = mac_log
sourcetype = tac_log
disabled = 0

Or is there any way to segregate logs coming in on one port into different sourcetypes?
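An inputs.conf stanza accepts only one sourcetype, but the sourcetype can be rewritten per event at parse time with props and transforms. A sketch under the assumption that the two log types can be told apart by a regex (the placeholder sourcetype name and pattern below are made up):

inputs.conf:
[tcp://6134]
index = top
sourcetype = mixed_6134
disabled = 0

props.conf:
[mixed_6134]
TRANSFORMS-set_sourcetype = force_mac_log

transforms.conf:
[force_mac_log]
REGEX = mac-specific-pattern
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::mac_log

Events matching the regex are re-tagged as mac_log; everything else keeps the original mixed_6134 sourcetype, and a second transform could re-tag those as tac_log.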