All Topics


I want to onboard McAfee EPO Cloud data. While there is an add-on available for the on-prem version of McAfee EPO, it does not support the cloud offering per the documentation (or at least it is not mentioned explicitly). Please guide me on how I can proceed here.
We currently have Splunk Enterprise and ServiceNOW integrated via the Splunk Add-on for ServiceNOW. We can view a ServiceNOW Incident # in Splunk and have enabled a hyperlink for the ServiceNOW Incident # which takes users to the ServiceNOW login page. Is there a way to take the user directly to the ServiceNOW Incident itself and bypass the ServiceNOW login page?
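One approach worth testing is a workflow action that deep-links to the incident record itself rather than the instance root. A sketch, assuming a Splunk field named incident_number (the stanza name, field name, and <your-instance> placeholder are all illustrative, not from the add-on):

```ini
# workflow_actions.conf -- hypothetical stanza; "incident_number" is an
# assumed field name and <your-instance> is a placeholder
[snow_open_incident]
type = link
label = Open in ServiceNow
link.method = get
link.target = blank
# Deep link by incident number; ServiceNow resolves the encoded query
# (sysparm_query=number=INCxxxxxxx) to the matching incident record.
link.uri = https://<your-instance>.service-now.com/nav_to.do?uri=incident.do%3Fsysparm_query%3Dnumber%3D$incident_number$
fields = incident_number
display_location = both
```

Note that a deep link only lands the user on the record after authentication; skipping the login page entirely is an SSO question (e.g., SAML) on the ServiceNow side, not something the link alone can do.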
Is it possible to share sourcetyped data between two apps? I have pfSense sending both firewall logs and Suricata eve JSON logs to the same UDP data input. The TA-pfsense app is sourcetyping, field extracting, and indexing the various syslog types from pfSense. The TA-Suricata app has the props.conf, lookups, eventtypes, etc. to extract relevant data from the eve JSON logs mixed into this UDP data stream. Is there a way to point the Suricata sourcetype assigned by the TA-pfsense props at the TA-Suricata configuration?
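Splunk merges props/transforms across all installed apps at runtime, so one app can assign a sourcetype that another app's stanzas then process; no inputs.conf handoff is needed. A sketch of the routing side, assuming TA-Suricata's extractions live under a stanza named [suricata] (the regex and stanza names here are illustrative assumptions):

```ini
# transforms.conf (illustrative; regex and stanza names are assumptions)
[force_suricata_sourcetype]
REGEX = suricata\[\d+\]:
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::suricata

# props.conf -- applied at index time to the sourcetype the UDP input assigns
[pfsense]
TRANSFORMS-set_suricata = force_suricata_sourcetype
```

Once events carry sourcetype=suricata, TA-Suricata's search-time props for [suricata] apply regardless of which app performed the rewrite.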
I would like to hear from other admins on how they are keeping up with the high demand of data onboarding requests into their Splunk instance in large organizations. We are battling more than 300 requests per month to onboard data into Splunk, as every application in the organization wants to use Splunk for monitoring, and the demand only keeps increasing. Most of these are custom application logs. The biggest bottleneck is defining props (LINE_BREAKER, timestamp settings, etc.) for the source types, since each individual log has to be analyzed manually. Other parts of the onboarding (inputs.conf, indexes.conf, etc.) can be easily automated for seamless onboarding, but not props. Leaving source types to Splunk defaults without defining props is not an option, as we have seen some serious performance issues on the indexers. I would like to hear from Splunk whether there is a strategic direction in this regard to make admins' lives easier with respect to onboarding, and from other admins who may have dealt with a similar situation and overcome it in creative ways. Regards, Pradeep
Can someone please help me change the background color of a table field name? By default the field name background color is grey.
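In Simple XML, a hidden HTML panel carrying CSS is a common way to restyle table headers. A sketch, where the table id, token name, and colors are all assumptions to adapt:

```xml
<dashboard>
  <row>
    <panel>
      <!-- depends on an undefined token, so this panel never renders visibly -->
      <html depends="$always_hidden$">
        <style>
          /* restyle the header cells of the table with id="my_table" */
          #my_table th {
            background-color: #3c444d !important;
            color: #ffffff !important;
          }
        </style>
      </html>
      <table id="my_table">
        <search>
          <query>index=_internal | stats count by sourcetype</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```

The `!important` flags are needed because Splunk's own stylesheet sets the header colors.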
TL;DR: How do I specify in props.conf that "pfsense:suricata" should use Splunk's JSON extraction? Situation explained below:

Hello all, I've done some tweaking to make the TA-pfsense add more sourcetypes effectively. The app's author has a really neat transforms.conf setup that looks into the syslog event and assigns a sourcetype so that the appropriate props.conf stanza is then used for field extraction. This way you get good fields from an OpenVPN event, good fields from a firewall (filterlog) event, etc.

For Snort and then Suricata being hosted on pfSense, I was using the Barnyard2 support and sending those logs over a different port, making the apps and sourcetyping easy. Now both Snort and Suricata have deprecated Barnyard2 support on pfSense: Snort still supports Unified2 output, while Suricata supports eve JSON over the same UDP data input that the TA-pfsense uses. Thanks to the TA-pfsense transforms I mentioned earlier, the data coming into that UDP feed gets sourcetyped as "pfsense:suricata", and I have a props.conf stanza for it with some rough regex to get fields like Classification, src_ip, etc.

For the question: I selected the eve JSON format option for fun; you get lots more data like protocols used, data flows, etc. Really neat stuff. ELK already has an amazing dashboard for this. How do I specify in props.conf that "pfsense:suricata" should then use Splunk's JSON extraction?
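For search-time JSON field extraction, a props.conf stanza deployed to the search head(s) is usually enough. A minimal sketch, assuming the event body is clean JSON:

```ini
# props.conf -- search-time JSON extraction for the eve logs
[pfsense:suricata]
KV_MODE = json
```

One caveat: if a syslog header precedes the JSON payload in _raw, automatic JSON extraction may not fire on its own; in that case, extracting the JSON portion into a field and running `| spath input=<that_field>` in SPL, or stripping the header at index time with a SEDCMD/transform, are fallbacks worth testing.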
I have uploaded the log file containing the backdoor information above into Splunk, but I'm not sure how to create a search query to present it in my dashboard. Do I need to extract any fields from this?
I have uploaded the log file containing the virus information above into Splunk, but I'm not sure how to create a search query to present it in my dashboard.
I've created a dropdown field for New User Accounts Created (Failed Attempts). And this is the search query. This is the error I get in my dashboard panel when I select a value from my dropdown field. This is the search query I used for the above panel. Is there a way to fix this error?
Hi, we have recently started working with AppD for our microservices architecture. We could successfully set it up for all microservices and are able to see the traffic and all sorts of dynamics. But for the one main Gateway API in front of all the microservices, we are not able to see any traffic. This Gateway API is built with .NET Core 2.2 and uses Ocelot to reroute service calls to the microservices. There are no controllers written in the Gateway; it is a plain microservice used for authenticating calls and rerouting to the respective microservices via Ocelot. I want to know whether there is any specific kind of configuration needed for this type of microservice to enable AppD. Please clarify.
Hey guys, I have been struggling for a few days now, but I can't find a good/efficient solution for my problem. I want to check three different Windows event IDs (for example 1, 2, and 3), where event ID 3 must be preceded by one of the other two. This is no problem at all, but my scheduled search should look for event ID 3 within a time range of 25 minutes. The problem is that the preceding event IDs (1, 2) could occur within a time range of 10 hours BEFORE event ID 3. If there are no such preceding events, an alarm should be triggered. I could let the search run over the last 10 hours, but I think there would be many false alarms. In short:
- check for event ID 3 within -20m@m and -1m@m
- check for every found event ID 3 whether there are preceding event IDs 1 OR 2 within the last 10h
At the moment I am doing this:

| tstats summariesonly=true allow_old_summaries=true count AS eventCount_3 from datamodel=Windows ...
| join type=left user
    [| tstats summariesonly=true allow_old_summaries=true count AS eventCount_1 from datamodel=Windows ...]
| join type=left user
    [| tstats summariesonly=true allow_old_summaries=true count AS eventCount_2 from datamodel=Windows ...]
| eval goodAuth=if((eventCount_1>=1 OR eventCount_2>=1),1,0)

Unfortunately, the "earliest" and "latest" statements will not work with "tstats summariesonly". I hope you understand my problem. Best regards, Tim
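One join-free pattern is a single tstats over the full 10-hour window, bucketed by time, then classifying the buckets in stats. This is only a sketch under assumptions: the dataset and field names (Windows.user, Windows.EventCode) and the window boundaries must be adapted to the actual data model, and earliest/latest are placed inside the tstats where clause here.

```spl
| tstats summariesonly=true allow_old_summaries=true count
    from datamodel=Windows
    where earliest=-10h latest=-1m@m
    by Windows.user Windows.EventCode _time span=1m
| rename Windows.user AS user Windows.EventCode AS EventCode
| eval in_alert_window=if(_time >= relative_time(now(), "-20m@m"), 1, 0)
| stats sum(eval(if(EventCode=3 AND in_alert_window=1, count, 0))) AS count3
        sum(eval(if(EventCode=1 OR EventCode=2, count, 0))) AS count12
        by user
| where count3 > 0 AND count12 = 0
```

The surviving rows are users with event ID 3 in the alert window but no preceding event ID 1 or 2 anywhere in the prior 10 hours, which is the alarm condition described above.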
Hi, below is my parent JSON:

{
   "advisories": [ ... ],
   "number_of_device": 1,
   "os_name": "ios",
   "os_version": 1234,
   "status": "checked"
}

And under advisories I have the JSON below:

"advisories": [
   {
      "a_id": "abcd1234",
      "cv": [ "random_number" ],
      "score": 6.5,
      "www": [ "www-12" ],
      "first_published": "2020-06-03T16:00:00",
      "last_updated": "2020-06-08T20:41:10",
      "ab_score": "2/4",
      "summary": "something"
   }
]

So here I want to count how many times ab_score = "2/4" and then get the corresponding score = 6.5 for each os_version. But when I am using spath and mvexpand, I am getting "2/4" for all ab_score values and all a_id values, and I don't understand what is happening. Ideally, in the raw data "2/4" appears in only 4 places, with 4 ab_score values attached to it, but I am receiving more than that, and repeated. Please help. @kamlesh_vaghela
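A common cause of every row showing "2/4" is extracting the leaf fields before mvexpand: ab_score is then a multivalue field on the whole event, and expanding one field copies all the others onto every row. Extracting per-advisory after the expand avoids that. A sketch, where the index and sourcetype are placeholders:

```spl
index=main sourcetype=advisory_json
| spath output=advisories path=advisories{}
| mvexpand advisories
| spath input=advisories
| where ab_score="2/4"
| stats count AS matches, values(score) AS score by os_version
```

Here each expanded row holds exactly one advisory's JSON in the advisories field, so the second spath yields one ab_score and one score per row before the stats.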
I am getting an error when trying to push my application from the deployment server to the cluster master. This is the same for all the different apps listed in the deployment-apps directory. This has worked many times, but this issue just started. It was once fixed by putting _internal in _cluster/local/indexes.conf, but that is still there and I still get the issue. Failed to load app. Application=duke_nix_forwarder_inputs cannot be loaded, as path=/web/splunk/etc/deployment-apps/duke_nix_forwarder_inputs does not exist
Hello, hoping for some help. I have a simple dashboard that allows a user to select a specific geography to return results for that geo. The search is based on a SQL query and uses tokens to determine the correct geography. The report had been working perfectly for a long time; recently we had an upgrade to our application that added JSON columns to our DB and moved some of the fields I was returning into the JSON CLOB data field. I updated the query to extract the necessary JSON fields, which works; however, after adding the JSON details to the query, the token is no longer working. I keep getting "Search is waiting for input". I have tried moving the connection argument before and after the SQL query (see below); neither seemed to work.

| dbxquery query="blah blah" connection=geography
| dbxquery connection=geography query="blah blah"

Hoping someone can point out what I need to add or update so that the token is recognized/working again. Thank you in advance for any assistance.
I have followed the module 4 instructions twice (once where I manually found the Add Data section) to upload data. All data uploads successfully without any errors, but there is nothing there upon performing a search. Please advise. I am using the Splunk Cloud version. I have tried both the drag-and-drop and browse options for the file. Both upload and are submitted successfully, but nothing exists in the index.
I need assistance building a search that looks back in time 5 minutes to check whether fields are present. If so, I do not need it to return any results. This is correlating two different security logs. Example: sourcetype=a field=1 field=2 field=3 is used to look back 5 minutes against sourcetype=b field=1 field=2 field=3. If there is a match, return no results. If there is no match, return the sourcetype=a field=1 field=2 field=3 results. Any assistance would be appreciated.
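A NOT subsearch is the usual pattern here: the subsearch over sourcetype=b returns its field/value combinations, and NOT suppresses any sourcetype=a event whose values match. A sketch, where field1/field2/field3 stand in for the actual field names being correlated:

```spl
sourcetype=a earliest=-5m@m
    NOT [ search sourcetype=b earliest=-5m@m
          | fields field1 field2 field3 ]
```

Events from sourcetype=a survive only when no sourcetype=b event in the same window carried the same field values, which matches the "return results only on no match" requirement; the usual subsearch limits (result count, runtime) apply.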
Hey, I am using Splunk 6.x and, on another system, Splunk 8.x with similar data backends. When I do a search for: index=myIndex earliest=-30 | head 10 it works on 6.x but not on 8.x, though when I use the time picker
I am trying to write a search that will update a lookup asset table with an additional metric column (weight1). However, I want to be able to append to the asset column without the second column being overwritten. Is this possible? Example:

index=* host=* | table host weight1 | dedup host | rename host AS asset | outputlookup append=false asset_score.csv

This will run as a saved search to update the lookup table periodically. However, if I modify the "weight1" column values in the lookup editor, the changes get wiped out whenever the above saved search runs. Any suggestions?
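One way to preserve the hand-edited values is to read the existing lookup first, append the freshly discovered assets, and dedup so the existing rows (with their weight1 values) win. A sketch, assuming the lookup's columns are asset and weight1 as described:

```spl
| inputlookup asset_score.csv
| append
    [ search index=* host=*
      | dedup host
      | rename host AS asset
      | table asset ]
| dedup asset
| table asset weight1
| outputlookup asset_score.csv
```

dedup keeps the first occurrence of each asset, which is the inputlookup row; newly seen hosts are added with an empty weight1 to fill in via the lookup editor later.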
Hello, I've got a 100% Windows environment with a deployment server, and I'm trying to configure server classes so we can automatically distribute config to servers based on their environment/roles. Using just name regex matches (whitelist.0 = regex) isn't going to work well with how random the names are, so I'd like to use "whitelist.from_pathname" instead, with a CSV or text file fed from one of my automation servers, but I just can't get it to work, or the UI is lying to me.

I tried CSV files first, two columns (ComputerName, EnvironmentName), using select_field, where_field, and where_equals to filter by EnvironmentName, and that didn't seem to work. Then I tried just a plain text file list (one server name per line) and that didn't work. I thought maybe it didn't like absolute paths (D:\) so I tried a relative path "etc\deployment-apps\DevServerList.txt" and that didn't work either. So I tried forward slashes. I've been restarting splunkd in between edits. This is basically what the CSV stanza looked like:

[serverClass:DevelopmentServers]
whitelist.select_field = ComputerName
whitelist.from_pathname = D:\Automation\ServerEnvironmentList.csv
whitelist.where_field = EnvironmentName
whitelist.where_equals = Development*

When I inspect clients in Forwarder Management > Clients, they're all showing nothing for Server Classes, but the moment I switch the serverClass back to whitelist.0 with some sample names, they start showing up. What am I missing?
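One variable worth isolating is the path resolution: from_pathname may be resolved relative to $SPLUNK_HOME rather than as an arbitrary absolute path, which would explain the D:\ attempts failing silently. A minimal sketch to test, using a plain list (one client name or wildcard pattern per line, like whitelist.N values); the location and filename are assumptions to adapt, e.g. with the automation server copying the file under $SPLUNK_HOME:

```ini
# serverclass.conf -- plain-list variant; path given relative to $SPLUNK_HOME
[serverClass:DevelopmentServers]
whitelist.from_pathname = etc/system/local/DevServerList.txt
```

After each edit, reloading the deployment server (splunk reload deploy-server) rather than only restarting splunkd is also worth trying, so the serverclass changes are actually re-read.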
Hi, I have gone through multiple answers and also the Splunk documentation about migrating from a standalone search head to a SHC, but my use case is a bit different. Use case: we want to deploy a Splunk Enterprise service in AWS, and as part of it we create a SHC with, say, 5 search heads. When an OS upgrade or Splunk version upgrade is required, we want to spawn 5 totally new EC2 instances to form a new SHC from a new AMI that has the upgrades. How do we copy the old SHC data/settings (search artifacts: dashboards, saved searches, etc.) to the new one? What is the best way to achieve this?