Hello, I have 3 SH servers, 1 master, and 3 indexers, all running on Kubernetes with Docker images. I want to deploy an SH cluster, and I understand that I have to deploy a deployer server. Can it be part of the master server, or should I create a new server just for this purpose? What is the best way to achieve my goal? Thanks
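For small deployments the deployer is commonly co-located with the cluster master, since the deployer role itself is lightweight. A minimal sketch of the relevant server.conf stanzas, assuming that layout (the label, key, and hostname are placeholders):

```
# server.conf on the deployer (can be the same instance as the cluster master)
[shclustering]
pass4SymmKey = <your_shcluster_key>
shcluster_label = shcluster1

# server.conf on each search head cluster member
[shclustering]
conf_deploy_fetch_url = https://<deployer-host>:8089
pass4SymmKey = <your_shcluster_key>
shcluster_label = shcluster1
```

The members fetch app bundles from whatever instance `conf_deploy_fetch_url` points at, so the deployer does not have to be a dedicated box.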
Has anyone had experience detecting a Golden Ticket attack using SPL?
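One widely used heuristic (not a complete detection) looks for Kerberos service-ticket activity (Windows EventCode 4769) using the legacy RC4 cipher (Ticket_Encryption_Type 0x17), which forged tickets often use while modern domains default to AES. A sketch, assuming Windows Security events are indexed with the standard Splunk field names:

```
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4769
    Ticket_Encryption_Type=0x17
| stats count values(Service_Name) AS services BY Account_Name, Client_Address
| sort - count
```

Results would still need baselining against hosts/accounts that legitimately use RC4.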
Hi, the query below works perfectly fine when I search directly in Splunk Web, but when I use the same query via curl it does not work. I can run a basic search using curl, but not this query. Kindly help me with this. Here is the query I used: curl -k -u UserName:Passwd https://splunkurl:port/services/search/jobs/export --data-urlencode search="search cs_uri_stem="*/reporting/wkReport.xls" AND (cs_uri_query="reportName=Pay+Certification" OR cs_uri_query="reportName=CS+Monthly+Payroll+Cost*")|stats count by AssociateOID, OrgOID, date, o, reportName" -d output_mode=csv The output shows a FATAL error. I replaced the double quotes in the search string with single quotes and got a different error. Query: curl -k -u UserName:Passwd https://splunkurl:port/services/search/jobs/export --data-urlencode search="search cs_uri_stem="*/reporting/wkReport.xls" AND (cs_uri_query="reportName=Pay+Certification" OR cs_uri_query="reportName=CS+Monthly+Payroll+Cost*")|stats count by AssociateOID, OrgOID, date, o, reportName" -d output_mode=csv The output then shows that "stats" is not recognized as an internal/external command. Kindly help me out with this.
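The likely culprit is shell quoting rather than Splunk: the inner double quotes terminate the outer double-quoted string, so the pipe and `stats` reach the shell instead of the REST endpoint. A sketch of the fix is to wrap the whole search value in single quotes so the embedded double quotes survive (host, credentials, and field names here are taken from the post and are placeholders):

```shell
# Build the search as one single-quoted shell string so the embedded
# double quotes reach Splunk literally instead of being eaten by the shell.
SEARCH='search cs_uri_stem="*/reporting/wkReport.xls" AND (cs_uri_query="reportName=Pay+Certification" OR cs_uri_query="reportName=CS+Monthly+Payroll+Cost*") | stats count by AssociateOID, OrgOID, date, o, reportName'

# Verify the quotes and the pipe survived intact:
printf '%s\n' "$SEARCH"

# Then hand it to the export endpoint unchanged (placeholder host/credentials):
# curl -k -u UserName:Passwd https://splunkurl:port/services/search/jobs/export \
#   --data-urlencode search="$SEARCH" -d output_mode=csv
```

On Windows, cmd.exe has no single-quote mechanism, which would explain the "stats is not recognized" error; there, the inner quotes need escaping (`\"`) or the command is easier to run from PowerShell.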
I'm interested in the mechanics of a base search (for a dashboard). Where are the results of a base search stored? Are they held in memory? I'm thinking that if you have a large number of users, effectively running many base searches, the storage requirement might become quite big. I have experimented with storing intermediate search results in a KV store (using a custom command) to accelerate the searches in my dashboards, but this has the problem that it does not scale well: with concurrent users, one might overwrite the data in the KV store while another is running a search.
We are embedding our Splunk reports into another website. This was working fine, but now we get an error: "No cookie support detected. Check your browser configuration." I'm assuming this is down to a change in the way browsers handle cookies; however, I'm unable to find a way around it. The site is running HTTPS and we have changed the following config:
[settings]
enableSplunkWebSSL = true
enable_insecure_login = True
x_frame_options_sameorigin = false
Does anyone have any ideas?
Hello Splunk Community, I am looking for some help. I would like to audit all fields that are not NULL for a given event; that is, I want a table of all fields whose value is not NULL. The thing is, I do not want to have to specify the fields, as there are too many, and I am creating an audit of all fields that have values; it would take too much time to specify the field names in the search and/or table. Hence, I am looking for a solution that lists all fields != NULL. I tried: | fillnull value="NULL" | search NOT "NULL" | table * and many other searches with the metadata, metasearch, and audit commands. I cannot seem to find the right syntax to exclude fields with a given value, e.g. NULL, or simply empty ones. Thank you in advance for your help!! Best, Julia
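One pattern that avoids naming any fields is to transpose a single event into field/value rows and drop the empty ones. A sketch (index is a placeholder; you may also want `fields - _*` to drop internal fields first):

```
index=your_index ...
| head 1
| table *
| transpose
| rename column AS field, "row 1" AS value
| where isnotnull(value) AND value != ""
```

For a population of events rather than one event, `| fieldsummary | table field count` lists every field that has at least one value, without enumerating names.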
Hello, I am trying to delete data from the _audit index. It currently contains the last 6 years of data and is occupying a lot of space. I modified $SPLUNK_HOME/etc/system/default/indexes.conf and added the below under the _audit stanza: [_audit] FrozenTimePeriodInSecs = 3153600 I restarted Splunk after making the changes, but I still see older data under _audit. Can you please help me find what is wrong here? Do I need to make any additional changes or invoke anything for the changes to take effect? Thanks in advance for your help.
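Two things worth checking, hedged as likely rather than certain causes: indexes.conf attribute names are case-sensitive and the documented spelling is `frozenTimePeriodInSecs` (lowercase f), and edits belong in etc/system/local, since default is overwritten on upgrade. A sketch:

```
# $SPLUNK_HOME/etc/system/local/indexes.conf
# (not .../default -- that file is replaced on upgrade)
[_audit]
frozenTimePeriodInSecs = 3153600
```

Note also that 3153600 seconds is about 36.5 days; a full year would be 31536000. Data is only removed bucket by bucket, once every event in a bucket is older than the period, so old events can linger until their bucket rolls.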
Hi, in my logs I have a field named Report that contains a lot of information: Report=Windows Failed\\Passed_Conditions[]:Failed_Conditions[antivirus_update]:Skipped_Conditions[])\,MACAddress=XXXXXXXXXX\,Framed-IP-Address=XXXXX\. What I need is only the Failed_Conditions vector, i.e. the content between the []. The content can vary, so I think I need a regex. Thank you in advance!!
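A rex along these lines (field name `Report` as in the sample; the character class stops at the closing bracket, so any condition list inside is captured):

```
... | rex field=Report "Failed_Conditions\[(?<failed_conditions>[^\]]*)\]"
```

This yields `failed_conditions=antivirus_update` for the sample event, and an empty string when the brackets are empty.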
Hi Team, We have configured custom queries to tell us which database queries take a long time to execute. We are getting the expected details; however, when we configure a policy to notify us about such queries, we've noticed that only an event summary is included in the email. It would be greatly helpful if we could send the event details too, in email alerts or via a dashboard & report, which we are currently unable to do. Thanks, Mohit Jain
Hello Splunk Community members, I am facing an issue with Tripwire on Splunk. It is generating a humongous file (te_assets.csv) of approximately 26 GB within an hour. Because the search head has only a single partition, this causes the Splunk service to stop. It seems Tripwire keeps updating the Splunk lookup with a fresh copy of the assets data. Is there a way to limit the size of the generated lookup file on Splunk, or any config changes to be made on the Tripwire side? Your help/feedback is highly appreciated. Thanks
In https://docs.splunk.com/Documentation/Splunk/8.0.7/Indexer/AboutSmartStore, there is a statement saying that "The home path and cold path of each index must point to the same partition." I am considering migrating some of the indexes in my clustered indexers to SmartStore. However, most of my indexes have their home path on an SSD partition, with the cold path on an HDD partition. Given the statement above, does that mean I cannot migrate them to SmartStore?
Hello Team, I need some help regarding certificates and SSO. Since December 14th we have been unable to access our Splunk Prod and Dev instances through SSO over the internet; we get a "site cannot be reached" error. SSO is powered by Ping Federate, and the SSO team informed us that the Splunk server default certificate has expired, hence the issue. Upon checking both the Prod and Dev instances further, we found that server.pem expired on December 14th, 2020, and idpCert.pem expires on January 15th, 2021. We generated a new server.pem file in the test environment, but it appears to be a combination of certificates and a private key. We used the following method to create the server.pem:
1. Run: $SPLUNK_HOME\bin\openssl x509 -enddate -noout -in $SPLUNK_HOME/etc/auth/server.pem
2. Check the expiry date in the output; if expired, do the steps below.
3. Go to $SPLUNK_HOME\etc\auth\
4. Rename server.pem to server.pem_backup
5. Restart Splunk with ./splunk restart
6. After the restart you will see a new server.pem file.
7. Check the new certificate's expiry date with: $SPLUNK_HOME\bin\openssl x509 -enddate -noout -in $SPLUNK_HOME/etc/auth/server.pem
8. The expiry date will be extended.
The server.pem file looks like this:
<BEGIN CERTIFICATE>
---------------------
<END CERTIFICATE>
<BEGIN ENCRYPTED PRIVATE KEY>
---------------------
<END ENCRYPTED PRIVATE KEY>
<BEGIN CERTIFICATE>
---------------------
<END CERTIFICATE>
whereas the .crt file the SSO team showed us was in the following format:
<BEGIN CERTIFICATE>
---------------------
<END CERTIFICATE>
For SAML we have this in authentication.conf (lines with password & client info removed):
[SAML]
caCertFile = /opt/splunk/etc/auth/cacert.pem
idpCertPath = idpCert.pem
sslKeysfile = /opt/splunk/etc/auth/server.pem
sslVerifyServerCert = false
sslVersions = SSL3,TLS1.0,TLS1.1,TLS1.2
ssoBinding = HTTPRedirect
1. Do we need to make a separate .crt file from the first block between Begin and End Certificate in this pem file and give it to the SSO team?
2. Since the idpCert is expiring soon, do we need to get a new idpCert from the SSO provider and place it in the idpCertPath? Are there any other things that need to be taken care of?
Could anyone help me with these? Any help would be really appreciated!
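On extracting just the certificate for the SSO team: `openssl x509` reads only the first certificate block of a combined PEM and ignores the private key. A self-contained demonstration (the generated self-signed pair stands in for Splunk's server.pem; on a real instance you would point `-in` at $SPLUNK_HOME/etc/auth/server.pem):

```shell
# Demonstration: build a combined PEM shaped like Splunk's server.pem
# (certificate + private key), then pull out only the certificate part.
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the Splunk-generated cert/key pair (self-signed, 1 day).
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 1 -nodes -subj "/CN=example" 2>/dev/null
cat cert.pem key.pem > server.pem   # same layout as etc/auth/server.pem

# "openssl x509" emits only the first certificate block, never the key:
openssl x509 -in server.pem -out splunk-server.crt

# Confirm what was produced (exactly one cert, no private key):
grep -c "BEGIN CERTIFICATE" splunk-server.crt
```

The resulting .crt matches the single-certificate format the SSO team showed.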
Hello, We have a wmi.conf in place like below:
[WMI:services]
wql = SELECT * FROM Win32_Process where Name like '%search%' OR Name like '%w3wp%'
interval = 300
disabled = 0
server = TestIN001
We want the events tagged with an environment such as prod, stage, or test. I have tried the below as well, but with no luck:
[WMI:services]
wql = SELECT * FROM Win32_Process where Name like '%search%' OR Name like '%w3wp%'
interval = 300
disabled = 0
server = TestIN001
_meta = environment::prod
Kindly help me with this, if it is feasible somehow.
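Whether wmi.conf honors `_meta` varies; an alternative that works for any input is adding an indexed field at parse time via props/transforms. A sketch, assuming the WMI events arrive with a sourcetype derived from the stanza name (check the actual sourcetype in your data first):

```
# props.conf (on the instance that parses the data: indexer or heavy forwarder)
[WMI:services]
TRANSFORMS-add_env = add_environment_meta

# transforms.conf
[add_environment_meta]
REGEX = .
FORMAT = environment::prod
WRITE_META = true
```

`REGEX = .` matches every event, so each one gets the indexed field `environment=prod`; per-host values would need one transform per environment, scoped by host in props.conf.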
@dmarling Hi, I've replaced the join in the query below, and posted the replacement query as well, but I'm not getting the proper output; please correct me if I made a mistake. Query with join: "index=168347-np [| `last_np_sourcetype("index=168347-np", "hw_eox")`] currentEoxMilestone=EoSCR OR currentEoxMilestone=LDoS (physicalType=*) | fields deviceId, deviceName, physicalElementId, hwEoxId | join deviceId [ search index=168347-np [| `last_np_sourcetype( "index=168347-np", "group_members")` ] groupId=290681 | fields deviceId ] | fields deviceName, physicalElementId, hwEoxId | dedup physicalElementId | table deviceName, physicalElementId, hwEoxId | stats count as Devices by hwEoxId | stats sum(Devices) as Devices " Query with the join replaced: "index=168347-np ([| `last_np_sourcetype("index=168347-np", "hw_eox")`] currentEoxMilestone=EoSCR OR currentEoxMilestone=LDoS (physicalType=*)) OR ([| `last_np_sourcetype( "index=168347-np", "group_members")` ] groupId=290681) | fields deviceId deviceName physicalElementId hwEoxId sourcetype | stats values(sourcetype) as sourcetype values(deviceName) as deviceName values(physicalElementId) as physicalElementId values(hwEoxId) as hwEoxId by deviceId | search sourcetype=hw_eox sourcetype=group_members | table deviceName, physicalElementId, hwEoxId | stats count as Devices by hwEoxId | stats sum(Devices) as Devices" Thanks
Hi all, I need help building a query to identify users who are possibly leaking / auto-forwarding emails to a personal email address (e.g. Gmail), based on Exchange logs. I also need to add exceptions for forwarding that has been approved. Sample Exchange log: {"AppId": "00000002-0000-0ff1-ce00-000000000000", "ClientAppId": "", "ClientIP": "98.98.98.98:65426", "CreationTime": "2020-12-11T14:44:23", "ExternalAccess": false, "Id": "720fba63-b1bf-4578-1eab-08d89de34066", "ObjectId": "123456\\1234567890", "Operation": "Set-InboxRule", "OrganizationId": "master", "OrganizationName": "abc.com", "OriginatingServer": "ABCDEFGHIJ (15.20.3632.023)", "Parameters": [{"Name": "AlwaysDeleteOutlookRulesBlob", "Value": "False"}, {"Name": "Force", "Value": "False"}, {"Name": "Identity", "Value": "Test"}, {"Name": "ForwardTo", "Value": "david@123.com;sam@abc.com"}, {"Name": "From", "Value": "sam@abc.com"}, {"Name": "Name", "Value": "Test"}, {"Name": "SubjectContainsWords", "Value": "TEST23"}, {"Name": "StopProcessingRules", "Value": "True"}], "RecordType": 1, "ResultStatus": "True", "SessionId": "84579310-05ab-4d3f-bd58-c8ebbe43da2f", "UserId": "chris@abc.net", "UserKey": "1003200077814EEE", "UserType": 2, "Version": 1, "Workload": "Exchange"} Company domain: abc.com; an auto-forward rule configured to send emails to "david@123.com" is suspicious and has to be alerted on. (Any domains other than abc.com and abc.net are considered external and have to be alerted on.)
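A sketch over the JSON above (the index name is a placeholder; in practice the approved-forwarding exceptions would live in a lookup rather than being hard-coded):

```
index=o365 Workload=Exchange Operation="Set-InboxRule"
| spath path=Parameters{} output=params
| mvexpand params
| eval pname=spath(params, "Name"), pvalue=spath(params, "Value")
| where pname="ForwardTo"
| eval dest=split(pvalue, ";")
| mvexpand dest
| rex field=dest "@(?<dest_domain>.+)$"
| search NOT dest_domain IN ("abc.com", "abc.net")
| table _time, UserId, ClientIP, dest, dest_domain
```

Splitting on ";" handles multi-recipient ForwardTo values, so the sample event would surface only david@123.com while sam@abc.com is filtered out.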
I have a query that needs to compare today's hourly log count with the average over the last 7 days; if the count is greater, it should trigger an alert.
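One sketch of that comparison (index is a placeholder): bucket counts by hour over the last 7 days, compute the average over the hours before today, and keep today's hours that exceed it:

```
index=your_index earliest=-7d@h latest=@h
| bin _time span=1h
| stats count by _time
| eval is_today = if(_time >= relative_time(now(), "@d"), 1, 0)
| eventstats avg(eval(if(is_today=0, count, null()))) as baseline_avg
| where is_today=1 AND count > baseline_avg
```

Saved as an alert with "trigger when number of results > 0", this fires whenever any of today's hourly counts beats the 7-day hourly average; comparing like-for-like hours (e.g. only 2pm vs prior 2pms) would need an extra `strftime(_time, "%H")` grouping.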
Hello, I am trying to generate an alert based on response times: in a given timeframe, if the percentage of response times with resp_time > 8000ms exceeds 2%, then alert. So far I am using this and getting a null value: index=AlwaysUseIndex service_name=MyService request_host=something.com "parameter_check" | rex field=route "^(?<route>.*)\?" | eval pTime = response_time | stats count as Volume by route | appendcols [ search pTime > 8000 | stats count as Err by route ] | fields route,Volume,Err This is to test; once I get some output, I want to do the percentage calculation below: | eval Percentage=round((Volume/Total)*100,2) | table route, Total,Err,Percentage Final output: Route, Total, Err, Percentage. I tried a few other options as well; not sure if it's a brain-fade moment after working for 12 hours, but I cannot seem to get the error count divided by the total to get percentages. I am able to get percentages, but I'm not sure how to alert based on them. This is the query I used to get percentages: index=AlwaysUseIndex service_name=MyService request_host=something.com "parameter_check" | rex field=route "^(?<route>.*)\?" | eval pTime = total_time | eval TimeFrames = case(pTime<=1000, "0-1", pTime>1000 AND pTime<=3000, "1-3", pTime>3000 AND pTime<=5000, "3-5", pTime>5000 AND pTime<=8000, "5-8", pTime>8000, ">8", pTime>20000, "ReallyBad") | stats count as CallVolume by route, TimeFrames | eventstats sum(CallVolume) as Total by route | eval Percentage=round((CallVolume/Total)*100,2) | sort by route, -CallVolume | chart values(Percentage) over route by TimeFrames | sort -TimeFrames But I am not sure how to alert based off only pTime > 8000 and pTime > 20000. Thanks
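A conditional count inside a single stats avoids the appendcols/null problem entirely; a sketch using the post's field names (the 2% threshold from the requirement is hard-coded):

```
index=AlwaysUseIndex service_name=MyService request_host=something.com "parameter_check"
| rex field=route "^(?<route>.*)\?"
| stats count as Total, count(eval(response_time > 8000)) as Err by route
| eval Percentage = round((Err / Total) * 100, 2)
| where Percentage > 2
```

Because the `where` clause already applies the threshold, the alert can simply trigger on "number of results > 0"; a second `count(eval(response_time > 20000))` column would cover the >20s tier the same way.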
Hello, I have multiple values for a field in my search results, which look like the ones below. Can you show me a regex condition so that the second value becomes "aks-nodes-15133334-vmss", without the trailing numbers after vmss? At the same time, I would like to keep value 1 as it is, since it doesn't have "vmss" in the trailing name. So: if the value contains vmss, remove the trailing numbers; if there's no vmss in the value, leave the value as it is. Before: 1) aks-agentpool-60893500-2 2) aks-nodes-15133334-vmss000002 After: 1) aks-agentpool-60893500-2 2) aks-nodes-15133334-vmss If I do (?<host>.+\D), it makes item 2 look like aks-nodes-15133334-vmss, but it makes item 1 look like aks-agentpool-60893500-. Thank you for your help.
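A sed-mode rex only rewrites values that actually match, so anchoring the pattern on "vmss" leaves the other values untouched; a sketch assuming the field is named host:

```
... | rex field=host mode=sed "s/(vmss)\d+$/\1/"
```

aks-nodes-15133334-vmss000002 becomes aks-nodes-15133334-vmss, while aks-agentpool-60893500-2 contains no "vmss" followed by digits at the end, so it passes through unchanged.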
Hi, I ran the "splunk offline --enforce-counts" command on one of the indexer servers in a multisite cluster. It has been over a day, and I would like to know the status or progress of my offline command. Is there a way to check?
I have a Bluecoat device whose logs I want to monitor using a UF. I have opened a port from the Bluecoat to a relay server (a Windows server with the UF installed), but data is not being forwarded to the indexer. Please let me know whether forwarding logs from network devices through a TCP port using the Splunk UF is possible. If it is, please suggest a way to troubleshoot why data is not being sent from the UF to the indexer.
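Yes, a UF can listen on a network port directly via a [tcp://] stanza in inputs.conf. A sketch for the relay server (the port, sourcetype, and index names are assumptions, not Bluecoat defaults):

```
# inputs.conf on the Windows relay running the UF
[tcp://9514]
sourcetype = bluecoat:proxysg:access
index = proxy
disabled = 0
```

For troubleshooting: confirm the UF is actually listening on the port (netstat -an on the relay), check the Windows firewall, verify the Bluecoat is configured to send to that host:port over TCP (not UDP, which needs a [udp://] stanza instead), and watch $SPLUNK_HOME\var\log\splunk\splunkd.log on the UF for connection or forwarding errors to the indexer.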