All Topics



Hi, I am currently working on a search that is supposed to tell me whether users went the prescribed CyberARK route or bypassed it for system access. So theoretically I should look at events 4624 and 4648 and see whether the connections come from CyberARK or not. But I found plenty of login events from the Citrix servers where our users do their work. Following up on this, it turns out that users on Citrix use a web browser to access an application on the target system that uses SSO for the user login. This also shows up as 4624, which for my purpose would be a false positive. Looking closer at the generated 4624 events, the key difference is the LogonProcessName and AuthenticationPackageName in the event. If AuthenticationPackageName=NTLM or LogonProcessName=NtLmSsp, then this seems to indicate an SSO login. And AuthenticationPackageName=Kerberos or LogonProcessName=Kerberos seem to be indicators of an RDP session (via CyberARK). Excluding the NTLM events seems to be the way to go, but as my Windows background is practically NIL after years of AIX/Linux, I wonder whether someone could confirm my hypothesis. Unfortunately I do not have a lab for checking this with a control case. thx afx
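A sketch of how such a classification search might look, assuming a `wineventlog` index and the standard Windows Security field names (`AuthenticationPackageName`, `LogonProcessName`, `user`, `src`, `dest`) — all of these are assumptions to adjust to your environment:

```spl
index=wineventlog EventCode IN (4624, 4648)
| eval access_path=case(
    AuthenticationPackageName=="NTLM" OR LogonProcessName=="NtLmSsp", "sso_browser",
    AuthenticationPackageName=="Kerberos" OR LogonProcessName=="Kerberos", "rdp_kerberos",
    true(), "other")
| stats count by user, src, dest, access_path
```

Splitting into an `access_path` column rather than excluding NTLM outright lets you eyeball whether the hypothesis holds before filtering.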
Hello All, I have configured an alert with earliest=-24h and head 3000, and I can see from the search that a lot of results are populating, but no alerts are getting generated. The alert threshold is "greater than 2" and 77 results are populating. I have integrated the alert with Splunk. At first I thought the integration might be broken, but I am checking under Activity -> Triggered Alerts and I do not see anything there: https://share.getcloudapp.com/kpuYKLmd I am not sure if this is due to the cron and other settings, so here they are: https://share.getcloudapp.com/o0uD6gyX
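For comparison, a minimal savedsearches.conf sketch of a scheduled alert that triggers on "number of events greater than 2" — the stanza name, base search and schedule are placeholders; checking your alert's stanza against these keys may show what is missing:

```ini
[My 24h Alert]
search = index=main earliest=-24h | head 3000
cron_schedule = 0 * * * *
enableSched = 1
counttype = number of events
relation = greater than
quantity = 2
alert.track = 1
```

If `alert.track` is off, triggered runs will not appear under Activity -> Triggered Alerts even when the condition fires.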
I am collecting Sysmon logs via a Splunk UF in XML format (renderXml=true). I need to forward some specific Sysmon events to QRadar without XML formatting, while continuing to send all Sysmon events in XML format to Splunk. I tried to create two different stanzas in inputs.conf to ingest the same log in two different ways, but it does not seem to work; it looks like Splunk merges the two together at runtime. The idea was to filter the non-XML events on the HF by using props.conf, transforms.conf and _SYSLOG_ROUTING to send them to QRadar.

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true
index = sysmon

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = false
index = sysmon
whitelist = 1,22
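Since two inputs.conf stanzas with the same name are merged into one, an alternative is to keep a single XML input and do the QRadar split on the heavy forwarder. A hedged sketch of the _SYSLOG_ROUTING part — the sourcetype name, the EventID regex and the QRadar host are assumptions, and note this routes the selected events as-is (it does not by itself strip the XML formatting):

```ini
# props.conf on the HF
[XmlWinEventLog:Microsoft-Windows-Sysmon/Operational]
TRANSFORMS-route_sysmon = send_sysmon_to_qradar

# transforms.conf on the HF -- match Sysmon EventID 1 and 22 in the XML payload
[send_sysmon_to_qradar]
REGEX = <EventID>(1|22)</EventID>
DEST_KEY = _SYSLOG_ROUTING
FORMAT = qradar_out

# outputs.conf on the HF
[syslog:qradar_out]
server = qradar.example.com:514
```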
How do we map the same CIM field from different data models? For example, from the same sourcetype:

field1 -- map to the Inventory model's 'dest' field
field2 -- map to the Alert model's 'dest' field
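One common approach is to split the sourcetype into eventtypes, tag each eventtype into the relevant data model, and supply `dest` via a calculated field. A sketch with placeholder names — `my_sourcetype`, the eventtype constraints, and the coalesce order are all assumptions about your data:

```ini
# eventtypes.conf
[my_inventory_events]
search = sourcetype=my_sourcetype field1=*

[my_alert_events]
search = sourcetype=my_sourcetype field2=*

# tags.conf -- tags that the respective data models select on
[eventtype=my_inventory_events]
inventory = enabled

[eventtype=my_alert_events]
alert = enabled

# props.conf -- dest falls back to whichever source field the event carries
[my_sourcetype]
EVAL-dest = coalesce(field1, field2)
```

Because field extractions apply per sourcetype (not per data model), the eventtype split is what keeps each model seeing only the events whose `dest` came from the intended field.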
While I was checking the SEP 14 Phantom app, 'test connectivity' was working fine, but for the 'scan endpoint' action, even though I have full access, it shows: "API failed. Status code: 401 Detail: You do not have permission to retrieve the list of domains." Can anyone help me resolve the issue?
I have a bash script that queries audit.log using ausearch for events that I have configured in audit.rules to tag with a specific key. This is the general idea of the script:

# Assign path variables
# Capture saved timestamp from last execution
# Save new timestamp for future execution
# Execute query using ausearch
# Redirect stdout and stderr to two different variables
# Check stderr variable does not equal "<no matches>" and exit execution if true

Now this is where I have tried multiple things, and while all of them work when executed from a terminal, they don't generate any results when Splunk executes them:

echo $stdout_var

OR

echo $stdout_var > /path/to/tmp
cat /path/to/tmp

I have even tried monitoring "/path/to/tmp"; that's when I realized this might be a user permissions issue, since the file is generated but there is never any content in it. Currently SPLUNK_OS_USER=root, but does that mean the script is executed as SPLUNK_OS_USER? Or do I have to configure the script through Splunk to run as a specific user? Again, when I execute this command manually from the CLI as root, it works exactly as expected, but it generates nothing when executed through the scripted input.

EDIT: I have continued to debug the issue.
1. The script is being executed as root (I placed "echo $UID" at the top of the script, which showed up in Splunk Web as an event that simply returned 0).
2. I have added "echo" commands at every step of the execution, and I have found that it keeps exiting at the stderr variable check. This makes no sense, because when I run exactly the same command with the same timestamp on the command line it works as expected, but when it is executed by Splunk as a scripted input, ausearch returns nothing.

I know this is starting to look like a bash script question, but from a Linux standpoint the script works as it should. I don't know what else to do at this point to make it work through Splunk.
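For reference, a typical scripted-input stanza (the script path, interval, sourcetype and index below are placeholders). Scripted inputs do run as the splunkd user, but with a much sparser environment than an interactive root shell, so a frequent culprit is a command resolved via PATH or an environment variable the script implicitly relies on; calling ausearch by absolute path (e.g. /sbin/ausearch) inside the script is worth trying:

```ini
# inputs.conf -- paths are placeholders
[script:///opt/splunk/etc/apps/myapp/bin/ausearch_wrapper.sh]
interval = 300
sourcetype = linux:audit:ausearch
index = os
disabled = 0
```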
Anyone know of a way to only return the matching values of a subsearch in a multivalue field of the parent search?

index="email" sourcetype="email_links"
    [ search index="sinkholed" sourcetype="bad_http"
    | rename raw_host as "extracted_host{}"
    | fields "extracted_host{}" ]
| stats dc("rcptto{}") as recipient_dc values("rcptto{}") values("extracted_host{}") values(subject) by from
| sort recipient_dc

The query works fine except I'm getting back more than I want. The results I get back in the "extracted_host{}" field are everything in that particular field's value array instead of just the matching criteria. For example, say there is a sinkholed domain in the subsearch called baddomain.com. The results I see in "extracted_host{}" are:

baddomain.com
www.w3.org
abc123advertisement.com
etcetcetc.com

I would like to only return what matched in the subsearch. Any assistance is greatly appreciated.
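One pattern that should keep only the matching values is to expand the multivalue field and re-apply the same subsearch as a filter before the stats. A sketch, untested against your data (note the subsearch runs twice, and mvexpand can be memory-hungry on large result sets):

```spl
index="email" sourcetype="email_links"
    [ search index="sinkholed" sourcetype="bad_http"
    | rename raw_host as "extracted_host{}"
    | fields "extracted_host{}" ]
| rename "extracted_host{}" as extracted_host
| mvexpand extracted_host
| search
    [ search index="sinkholed" sourcetype="bad_http"
    | rename raw_host as extracted_host
    | fields extracted_host ]
| stats dc("rcptto{}") as recipient_dc values("rcptto{}") values(extracted_host) values(subject) by from
| sort recipient_dc
```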
| makeresults
| eval _raw="Source1_field2,Count
dev,6
prod,5
uat,7
qa,8"
| multikv forceheader=1
| table Source1_field2,Count
| rename COMMENT as "this is sample your stats output"
| transpose 0 header_field=Source1_field2
| eval "prod + uat"=prod+uat
| fields - prod uat
| transpose 0 column_name="Source1_field2" header_field=column

This code works, but the code sample adds extra spaces; when I copy and paste it into search, it does not work. What should I do? This is the correct result:

Source1_field2   Count
dev              6
qa               8
prod + uat       12
All, I was reusing the Modal Window project from Ian Gillespie as described in the Hurricane Labs Tutorial Project. This project shows a table in the Modal Window; I would like to have different visualizations like Chart, Single View panels, etc. Instead of using the TableView, I tried changing it to ChartView, ChartElement etc., but I am not able to make it work -- I still get the output as a table in the Modal Window. Could someone show me how to do that? It would be really helpful if an example were given for Single View as well as ChartView. Dashboard Code: <dashboard script="modalviewsearchapp1.js"> <label>Modal Demo</label> <row> <panel> <table id="master"> <title>Master</title> <search> <query>index=_internal | stats count by sourcetype</query> <earliest>-60m@m</earliest> <latest>now</latest> </search> <!-- Set the type of drilldown; since we will always consume the same field, use row--> <option name="drilldown">row</option> </table> </panel> </row> <row> <panel> <table id="slave"> <title>slave</title> <search> <query>index=_internal | dedup group | table group</query> <earliest>-60m@m</earliest> <latest>now</latest> </search> <!-- Set the type of drilldown; since we will always consume the same field, use row--> <option name="drilldown">row</option> </table> </panel> </row> </dashboard> Script - "modalviewsearchapp1.js" require([ 'underscore', 'backbone', '../app/search/components/ModalViews', 'splunkjs/mvc', 'splunkjs/mvc/searchmanager', 'splunkjs/mvc/simplexml/ready!' 
], function(_, Backbone, ModalView, mvc, SearchManager) { var master = mvc.Components.get("master"); var tokens = mvc.Components.getInstance("submitted"); var slave = mvc.Components.get("slave"); var detailSearch = new SearchManager({ id: "detailSearch", earliest_time: "-24h@h", latest_time: "now", preview: true, cache: false, search: "index=_internal sourcetype=$sourcetype$ | timechart count" }, {tokens: true, tokenNamespace: "submitted"}); var detailedSearch = new SearchManager({ id: "detailedSearch", earliest_time: "-24h@h", latest_time: "now", preview: true, cache: false, search: "index=_internal group=$group$ | chart count by sourcetype" }, {tokens: true, tokenNamespace: "submitted"}); master.on("click", function(e) { e.preventDefault(); if(e.field === "sourcetype") { var _title = e.data['click.value']; tokens.set('sourcetype', _title); var modal = new ModalView({ title: _title, search: detailSearch }); modal.show(); } }); slave.on("click", function(e) { e.preventDefault(); if(e.field === "group") { var _title = e.data['click.value']; tokens.set('group', _title); var modal = new ModalView({ title: _title, search: detailedSearch }); modal.show(); } }); }); Script - ModalViews define([ 'underscore', 'backbone', 'jquery', 'splunkjs/mvc', 'splunkjs/mvc/searchmanager', 'splunkjs/mvc/simplexml/element/table', 'splunkjs/mvc/chartview', 'splunkjs/mvc/simplexml/element/chart', 'splunkjs/ready!' 
], function(_, Backbone, $, mvc, SearchManager, ChartElement) { var modalTemplate = "<div id=\"pivotModal\" class=\"modal\">" + "<div class=\"modal-header\"><h3><%- title %></h3><button class=\"close\">Close</button></div>" + "<div class=\"modal-body\"></div>" + "<div class=\"modal-footer\"></div>" + "</div>" + "<div class=\"modal-backdrop\"></div>"; var ModalView = Backbone.View.extend({ defaults: { title: 'Not set' }, initialize: function(options) { this.options = options; this.options = _.extend({}, this.defaults, this.options); this.childViews = []; console.log('Hello from the modal window: ', this.options.title); this.template = _.template(modalTemplate); }, events: { 'click .close': 'close', 'click .modal-backdrop': 'close' }, render: function() { var data = { title : this.options.title }; this.$el.html(this.template(data)); return this; }, show: function() { $(document.body).append(this.render().el); $(this.el).find('.modal-body').append('<div id="modalVizualization"/>'); $(this.el).find('.modal').css({ width:'90%', height:'auto', left: '5%', 'margin-left': '0', 'max-height':'100%' }); var search = mvc.Components.get(this.options.search.id); var detailTable = new ChartElement({ id: "detailTable", 'charting.chart': 'pie', managerid: search.name, el: $('#modalVizualization') }).render(); this.childViews.push(detailTable); search.startSearch(); }, close: function() { this.unbind(); this.remove(); _.each(this.childViews, function(childView) { childView.unbind(); childView.remove(); }); } }); return ModalView; });
Hi, in my organization we have a fully automated Splunk Enterprise environment -- automated except for one thing: apps on the search head. I would like to kindly ask you for some advice on how to organize a search head environment for multiple users. In our environment we have 50% Splunk advanced users (app developers) and 50% dashboard readers. At the moment everyone uses the Search app as the default one, and each user stores all their searches there. How do we back up each user's environment? In many Splunk videos on YouTube I saw that many people create a Splunk app and store all their content there. The app is then located in SPLUNK_HOME/etc/apps/user-appname and can be easily managed by a system administrator (tar, backup, restore). The problem is that only an admin can create apps. Is there any way to store users' search environments in some containers/folders which an administrator can easily manipulate? I would like to store Splunk apps in a git repo and user search environments somewhere in backup, so that when I do a search head deployment, my script will install the list of current apps and restore the last backup of user searches. What is the best way to manage this? I will appreciate any support. Thanks in advance
We use Splunk to report on daily smartGrid meter data, with 1 indexer, 1 search head and 1 heavy forwarder. We have observed that since the upgrade to 7.3.3 in December 2019, the results of scheduled searches no longer contain all the expected fields/field values. When running these same searches manually, we do see all the fields and field values. These searches are used to populate KV store lookup tables, which therefore do not get populated properly. The search ends with a collect to a KV store and returns around 3 million records. Although our search query and volumes are large (3M records), according to the search logs there are no errors and the searches complete successfully.
I am wondering if anyone has come across this issue before.

System and application versions:
• Docker version 18.09.4
• Splunk version 7.2.6 (?)
• Windows Server 2019 1809 Build

A summary of what we've discovered and background information:
• Splunk seems to prevent Docker from starting containers; they are stuck in a "Created" state
• We do not use Splunk explicitly as our Docker logging service, i.e. Splunk is not referenced in any Docker config
• Docker and the SplunkForwarder service both start up on host boot
• Changing the dependencies on the service (i.e. Docker starts first or Splunk starts first) doesn't fix the issue
• Stopping Splunk while Docker is running and then creating the containers works
  o As soon as one container has started successfully, we can start Splunk and still create more containers
• Restarting Splunk while Docker is running and then creating the containers does not work

Steps to reproduce on a machine:
1. Boot the server up; Docker and Splunk start automatically
2. Attempt to run docker-compose to create our containers with no containers already running or in an exited state; Docker gets stuck with containers in a "Created" state

Steps to mitigate the issue:
1. When there are no containers running, stop the Splunk service
2. Run docker-compose to create at least one container successfully
3. Start the Splunk service
4. Run docker-compose to bring up any remaining containers

Any help or ideas for a workaround would be appreciated. TIA
Hi team! If I want to upgrade from 6.5.2 to 7.3.4, can I do it directly, or do I have to do an extra step? Is there an upgrade path? I have 1 Cluster + Deployer | 2 indexers | 3 search heads | 1 HF. Thank you!
I have found two apps, this one and this one, but the first one only pulls security alerts, and for the other one you need to deploy the app to the servers. The thing is, we also need the clients' info, and they don't have forwarders installed. Is there an app that pulls all Windows Defender logs from Azure?
Hi, I want to create my own dashboard, a customised version of the ones in the app, but when I try to use the built-in searches outside of the app they don't work, and when I use the clone option in the app there doesn't seem to be any way to add my new dashboard to the menu system in the app. So, for instance, under Operations there would be the normal dashboards and my new one. I'm pretty new to this; am I missing something obvious? Thanks for any help.
Hello, In order to make syslog communication through TLS work, I followed this procedure (https://docs.splunk.com/Documentation/Splunk/8.0.2/Security/Howtoself-signcertificates) on one node. I backed up the original cacert.pem and copied the newly created root certificate to $SPLUNK_HOME/etc/auth/cacert.pem. I also copied the server certificate to $SPLUNK_HOME/etc/auth/server.pem and changed the configuration in the files $SPLUNK_HOME/etc/apps/launcher/local/inputs.conf and $SPLUNK_HOME/etc/system/local/server.conf. Since then, I have this error log: ERROR LMTracker - failed to send rows, reason='Unable to connect to license master=https://xxx:8089 Error connecting: SSL not configured on client' (xxx corresponds to the license master server). So I tried to restore the original cacert.pem and server.pem, but I still get the error. I tried to connect to the license master through TLS with curl, but I get an error (Peer's Certificate has expired). I checked the license master certificate and it appears to have expired a month ago. But license verification is working from the other Splunk nodes (on which I did not change the root certificate), and curl works from there too. Also, I am not able to renew this certificate, as it is signed by the default root CA and I do not have the passphrase of the private key. The connection to the web interface of this node does not work; I get an internal server error. Could you please help me figure out what is blocking the license verification? Do not hesitate to tell me if you need more details. Thanks in advance and have a nice day!
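To pin down which certificate is the problem, expiry dates can be inspected directly with openssl. A self-contained sketch — it generates a throwaway cert purely to demonstrate the commands; on a real node, point the last two commands at $SPLUNK_HOME/etc/auth/server.pem and cacert.pem instead:

```shell
# Create a throwaway self-signed cert, valid for 1 day, purely for demonstration
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo_key.pem -out /tmp/demo_cert.pem \
    -days 1 -nodes -subj "/CN=demo-license-master" 2>/dev/null

# Show the expiry date of the certificate
openssl x509 -enddate -noout -in /tmp/demo_cert.pem

# Exit status 0 means the cert has not expired yet
openssl x509 -checkend 0 -noout -in /tmp/demo_cert.pem && echo "certificate still valid"
```

Running the `-enddate` check on each node's cacert.pem and server.pem should show which node still carries the expired certificate.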
Hello, this is my query:

| loadjob savedsearch="myquery"
| where (strftime(_time, "%Y-%m-%d") >= "2020-02-26") AND (strftime(_time, "%Y-%m-%d") <= "2020-03-03") AND STEP=="Click"
| bucket _time span=1d
| table _time,MESSAGE
| where MESSAGE = "337668c2-162c-4f4f-bda9-92f7816f2752" OR MESSAGE = "46095117-4dcb-4ebc-9906-8c23f1a1a26b" OR MESSAGE = "60eb62a4-c54a-4fc0-9aaa-17726ff62929" OR MESSAGE = "8b5e055c-17ab-4135-8b90-1fbc65032792"

What I want is only certain lines: if a MESSAGE appears on the 26th, 27th and 28th, I only want the row from the 26th (the earliest occurrence).
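If the goal is the earliest occurrence per MESSAGE within the window, one way is to replace the final table with a stats that keeps the minimum _time per MESSAGE — a sketch built on the query above:

```spl
| loadjob savedsearch="myquery"
| where (strftime(_time, "%Y-%m-%d") >= "2020-02-26") AND (strftime(_time, "%Y-%m-%d") <= "2020-03-03") AND STEP=="Click"
| where MESSAGE="337668c2-162c-4f4f-bda9-92f7816f2752" OR MESSAGE="46095117-4dcb-4ebc-9906-8c23f1a1a26b" OR MESSAGE="60eb62a4-c54a-4fc0-9aaa-17726ff62929" OR MESSAGE="8b5e055c-17ab-4135-8b90-1fbc65032792"
| stats earliest(_time) as _time by MESSAGE
| bucket _time span=1d
| table _time, MESSAGE
```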
Hello, I'm trying to integrate Splunk MINT with a JIRA Cloud instance; however, whatever I try, I keep getting this error: "Wrong credentials. Please try again". I have read the documentation thoroughly. Link: https://docs.splunk.com/Documentation/MINTMgmtConsole/1.0/UserGuide/Integratewithdevelopertools#JIRA I have the following observations:
1 - The documentation is not very clear on whether it supports JIRA Cloud instances or not.
2 - The documentation is not clear on whether user emails (the main form of login to JIRA Cloud) can be used instead of usernames. It's worth noting that JIRA Cloud no longer has usernames for users. Link: https://confluence.atlassian.com/cloud/blog/2018/06/say-goodbye-to-usernames-in-atlassian-cloud
3 - There are also now API tokens that can be generated on JIRA. Link: https://confluence.atlassian.com/cloud/api-tokens-938839638.html
4 - The documentation is also not clear on what the format of the JIRA URL should be, i.e. should it be "https://myinstance.atlassian.net/" or "https://myinstance.atlassian.net/jira" or something else.
Please help me connect Splunk MINT to our JIRA Cloud instance.
This question may not be 100% related to Splunk, but I am sure Splunkers have done this many times, so I thought I would just ask. I want to identify the real destination when a user logs on to a host by authenticating through a DC with Kerberos or NTLM. I looked at events 4624, 4768 and 4771 in the DC logs; they only have the real source information, and I can't find the real destination information in these events. Is there another event I should look at, or is some field missing in these events? My example is as follows: user A uses host A to log on to host C, authenticating through DC B. I only collect logs at DC B, so I want to know how to identify the host C information from the logs in this scenario. Thank you in advance.
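For the Kerberos case, service-ticket requests on the DC (EventCode 4769) do carry the target: the Service Name field is the account of the destination service, and for host access that is typically the destination computer account (e.g. HOSTC$). A hedged SPL sketch — the index and the exact field names (Service_Name vs ServiceName etc.) vary by add-on, so adjust to your environment:

```spl
index=wineventlog EventCode=4769 Service_Name="*$"
| rename Service_Name as dest_host, Client_Address as src_ip, Account_Name as user
| stats count by user, src_ip, dest_host
```

NTLM pass-through validation (4776) on the DC does not carry the destination the same way, so this sketch only covers the Kerberos path.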
Hi Guys, There is a CSV which gets updated once every day with details such as:

VMName, Group, CPU, Memory, Storage, PowerState

I need to add a column "AnyChanges" with value Yes or No, so that if there is a change in values for a particular host it shows as Yes in the "AnyChanges" column:

VMName, Group, CPU, Memory, Storage, PowerState, AnyChanges

Note: this needs to be checked over a month; if there are any changes, they should be highlighted in the "AnyChanges" column. Please let me know how this can be implemented.
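Assuming the CSV is being indexed daily (the index and sourcetype names below are placeholders), streamstats can compare each day's row with the previous one per VMName and derive the AnyChanges column — a sketch:

```spl
index=vm_inventory sourcetype=vm_report earliest=-30d@d
| sort 0 VMName _time
| streamstats current=f window=1 last(CPU) as prev_CPU last(Memory) as prev_Memory
    last(Storage) as prev_Storage last(PowerState) as prev_PowerState by VMName
| eval AnyChanges=case(
    isnull(prev_CPU), "No",
    CPU!=prev_CPU OR Memory!=prev_Memory OR Storage!=prev_Storage OR PowerState!=prev_PowerState, "Yes",
    true(), "No")
| table _time VMName Group CPU Memory Storage PowerState AnyChanges
```

The first day a VM appears has no previous row to compare against, which the `isnull(prev_CPU)` branch treats as "No".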