All Topics

I have the following search:

    index=sandbox document_type=test-collat-record-json_v2
    | where ((isnotnull(test_result)) AND project_short="LNL" AND collateral_type="fw" AND ingredient_type="ifwi_bin" AND ingredient="csme")
    | dedup test_collat_record_json_guid
    | join type=inner left=L right=R where L.project_short=R.project_short L.collateral_type=R.collateral_type L.ingredient_type=R.ingredient_type L.ingredient=R.ingredient
        [search document_type=test-collat-record-summary-json]
    | table L.collat_record_json_guid, L.project_short, L.collateral_type, L.ingredient_type, L.ingredient, L.version, L.test, L.test_result, R.number_of_tests, R.passing_threshold

I'm joining a set of test results and then looking up, from another data source, what a passing set of results should look like; hence the join. It works for me and yields the expected table (screenshot omitted). So far so good. Now I just want to aggregate the results, get counts of passing/failing tests, and compare those counts with the passing_threshold field. So I added:

    | stats count(eval(L.test_result=="SUCCESS")) as passingTests count(eval(L.test_result=="FAILURE")) as failingTests values(R.number_of_tests) as numTests, values(R.passing_threshold) as pass_threshold by L.collat_record_json_guid

But the two evaluations of success and failure tests come back as zero, even though the table clearly shows they should be 2 and 1 respectively. What have I done wrong? Does eval not work on joined data? I am using the correct aliases for the fields.
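In SPL, eval only recognizes plain identifiers, so a dotted name like L.test_result inside eval is not read as a single field name; single-quoting the name is the usual fix. A minimal sketch of the same stats call with the dotted field names single-quoted inside eval, assuming (the post does not confirm it) that this is the cause of the zero counts:

    | stats count(eval('L.test_result'=="SUCCESS")) as passingTests
            count(eval('L.test_result'=="FAILURE")) as failingTests
            values(R.number_of_tests) as numTests
            values(R.passing_threshold) as pass_threshold
            by L.collat_record_json_guid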
Hi, I'm trying to save the results of two queries on a dashboard to tokens and then add them up in a third query. I'm running 8.2, and it's beginning to look like all of this only became available in 9? https://docs.splunk.com/Documentation/Splunk/9.0.0/DashStudio/searchTokens I don't see the button described there ("In the Edit Data Source panel, check the box for Use search results or job status as tokens"). I tried some of the options like job.resultCount, but I cannot get anything to interpolate. Am I totally out of luck on v8.2?
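That Dashboard Studio feature is documented for 9.0, but classic Simple XML dashboards have long supported result tokens via the <done> handler, including on 8.2. A sketch with a hypothetical query and token name:

    <search id="q1">
      <query>index=main | stats count</query>
      <earliest>-24h</earliest>
      <latest>now</latest>
      <done>
        <set token="count1">$result.count$</set>
      </done>
    </search>

The token can then be referenced as $count1$ inside another search's <query>, so two such tokens can be combined with an eval in the third query.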
Hello, we recently upgraded Splunk to v8.2.6. This broke the Demisto integration, so I upgraded the app to v4.0, but that still has not fixed the issue. The alert triggers but does not send to Demisto. Looking in the demisto.log file I found this:

    2022-08-08 10:25:42,792 - DEMISTOALERT - INFO - In Main Method
    2022-08-08 10:25:42,801 - DEMISTOALERT - ERROR - Error in main, error: name 'basestring' is not defined
    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/TA-Demisto/bin/demisto_send_alert.py", line 126, in <module>
        modaction = DemistoAction(sys.stdin.read(), modular_action_logger, 'demisto')
      File "/opt/splunk/etc/apps/TA-Demisto/bin/lib/cim_actions.py", line 136, in __init__
        if isinstance(self.sid, basestring) and 'scheduler' in self.sid:
    NameError: name 'basestring' is not defined

I went to the configuration page to update the configuration, but it shows an error screen (screenshot omitted). Anyone have any ideas? Thanks!
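basestring exists only in Python 2; Splunk 8.x runs Python 3, so any isinstance(x, basestring) check raises exactly this NameError. A minimal sketch of a Python 2/3 compatibility shim for that kind of check (an illustration of the failure mode, not the TA's actual fix):

    # basestring is defined in Python 2 but was removed in Python 3.
    try:
        string_types = basestring  # Python 2
    except NameError:
        string_types = str         # Python 3

    sid = "scheduler__admin__search__RMD5abc"  # hypothetical sid value
    if isinstance(sid, string_types) and 'scheduler' in sid:
        print("looks like a scheduled search")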
I created a savedsearches.conf file to create a Splunk alert and restarted the Splunk service, but I still can't see the new alert in the UI. I am using the following configuration (not included in the post). Thanks in advance!
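For comparison, a minimal hypothetical savedsearches.conf alert stanza; the stanza name, search, schedule, and thresholds are all assumptions, not the poster's actual config. Visibility also depends on which app's local directory the file lives in and on that app's permissions:

    # Hypothetical example; placed in $SPLUNK_HOME/etc/apps/<app>/local/savedsearches.conf
    [My Example Alert]
    search = index=main sourcetype=syslog ERROR
    enableSched = 1
    cron_schedule = */15 * * * *
    alert_type = number of events
    alert_comparator = greater than
    alert_threshold = 0
    actions = email
    action.email.to = someone@example.com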
I am creating a dashboard to show any new logs that are added to our environment within a period of time. For example, if we started ingesting AWS logs and Azure logs two days ago, is there a way I can create a dashboard that shows these two new ingestions? I am having trouble writing a search query that displays the name of any index recently added to the environment. Does anyone have any suggestions on how to solve this? Thanks.
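One possible sketch, using tstats to list indexes whose earliest indexed event falls inside the window; the two-day window is an assumption to match the example:

    | tstats min(_time) as firstSeen where index=* by index
    | where firstSeen >= relative_time(now(), "-2d@d")
    | convert ctime(firstSeen)

The same idea works with sourcetype (or both) in the by clause, if "new logs" means new sourcetypes rather than new indexes.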
Hello, I have a big doubt about replication factor (RF) behavior in single-site versus multisite clusters. When a single site is used, a hypothetical configuration of Replication Factor = 2 is quite easy: I have two copies of the same data in the site (originating + copy), and only one peer can go down. In a multisite cluster (for example, two sites), if I understood correctly:

- site_replication_factor = origin:1,site1:1,site2:1,total:2 - there are two copies (originating site = 1, other site = 1). Only one peer can be down: is that one in total, or one per site?
- site_replication_factor = origin:2,site1:1,site2:1,total:3 - there are three copies (originating site = 2, other site = 1). Only two peers can be down: is that two in total, or two per site?

Also, with site_replication_factor = origin:1,site1:1,site2:1,total:2, does losing the peer in the originating site mean the search heads redirect queries to the second site (SF=2)? Thanks
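For reference, a hypothetical two-site manager-node configuration in server.conf; all values are illustrative only (and on versions before 8.1 the mode value is master rather than manager):

    [general]
    site = site1

    [clustering]
    mode = manager
    multisite = true
    available_sites = site1,site2
    site_replication_factor = origin:1,site1:1,site2:1,total:2
    site_search_factor = origin:1,site1:1,site2:1,total:2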
I am trying to combine data from one sourcetype with a search input from a formatted CSV file; however, I can send only one value as the input for the search. My requirement is that, along with that input value, I want to send two or three related fields through to the final output.

    index=cdr source=* sourcetype=cdr globalCallId_ClusterID=main destDeviceName IN (
        [ | inputlookup Wireless.csv
          | rex field=USERID "(?<USERID>\w{6})$"
          | eval destDeviceName="ABC" + 'USERID' + "*"
          | table destDeviceName
          | mvcombine destDeviceName
          | nomv destDeviceName
          | return $destDeviceName ])
    | table globalCallId_ClusterID globalCallID_callId callingPartyNumber originalCalledPartyNumber origDeviceName destDeviceName DateTimeOrigination DisconnectTime duration

The above query gives me the users whose destDeviceName matches the input; however, when that is formatted into the table, I want to add additional fields that correspond to the input lookup file.
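One possible sketch: keep the subsearch for filtering, then enrich the filtered results with a lookup against the same CSV to pull in the extra columns. Here SITE and OWNER are hypothetical column names in Wireless.csv, and the rex assumes the six-character user id can be re-extracted from the matched destDeviceName; if the CSV's USERID column stores a longer value than that suffix, the lookup key would need the same transformation on the CSV side:

    ... base search with the subsearch as above ...
    | rex field=destDeviceName "^ABC(?<USERID>\w{6})"
    | lookup Wireless.csv USERID OUTPUT SITE OWNER
    | table globalCallId_ClusterID destDeviceName USERID SITE OWNER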
If we are running Splunk UBA in a cluster with three nodes, is there a way to cut off one node and push all the running tasks onto the other two nodes, or even onto one node?
Hi all, according to the documentation for Splunk Cloud Classic Experience: "If your Splunk Cloud Platform deployment is on Classic Experience, you can manage your indexes programmatically using the Splunk REST API cluster_blaster_indexes/sh_indexes_manager endpoint." (Manage indexes on Splunk Cloud Platform Classic Experience - Splunk Documentation) When I use the command:

    curl -k -H "Authorization: Bearer MyToken" https://MySplunk.splunkcloud.com:8089/services/cluster_blaster_indexes/sh_indexes_manager?output_mode=json

I get this response:

    <!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"><meta http-equiv="refresh" content="1;url=https://XX.splunkcloud.com/en-US/servicesNS/nobody/search/data/indexes"><title>303 See Other</title></head><body><h1>See Other</h1><p>The resource has moved temporarily <a href="https://XX.splunkcloud.com/en-US/servicesNS/nobody/search/data/indexes">here</a>.</p></body></html>

This brings me to a 404 page. Basically, I want to create an index using the REST API on Splunk Cloud (Classic Experience).
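For what it's worth, creating an index through that endpoint is a POST rather than a GET; a sketch, where the index name and parameter values are assumptions to adapt (parameter names as documented for the Classic self-service index endpoint):

    curl -k -X POST \
      -H "Authorization: Bearer MyToken" \
      https://MySplunk.splunkcloud.com:8089/services/cluster_blaster_indexes/sh_indexes_manager \
      -d name=my_new_index \
      -d datatype=event \
      -d maxGlobalRawDataSizeMB=100 \
      -d output_mode=json

The 303 redirect to /data/indexes may also indicate the stack is not actually on Classic Experience (the endpoint does not exist on Victoria), which is worth verifying first.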
Hi everyone, we have another internal team that is trying to use the API to return some data we built for them. Unfortunately, they aren't able to get the payload, only the headers. Can someone suggest a solution, or point out what we are doing wrong? Below is the response from the Splunk API for their call.

Target: https://SomeHost:Port/servicesNS/user/search/search/jobs/export

Request body:

    search=search inputlookup somefile.csv | table Day User emp_id Data

Response:

    <results preview='0'>
    <meta>
    <fieldOrder>
    <field>Day</field>
    <field>User</field>
    <field>emp_id</field>
    <field>Data</field>
    </fieldOrder>
    </meta>
    </results>
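One hedged guess at the cause: inputlookup is a generating command, so in the search= payload it needs a leading pipe. As written, "search inputlookup somefile.csv" searches for those literal terms, which would explain getting the field order from the table command but no result rows. A sketch of the corrected call (host and credentials are placeholders):

    curl -k -u user:password \
      https://SomeHost:Port/servicesNS/user/search/search/jobs/export \
      --data-urlencode 'search=| inputlookup somefile.csv | table Day User emp_id Data' \
      -d output_mode=json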
I would like to reduce the log data size in an index by cutting fields that are not useful for the use case. Before cutting any fields, I would like to check each field's utilization: whether the field is used in any dashboards or in searches by other users.
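Two hedged sketches for checking usage, where my_field is a placeholder for the field under evaluation. Searches that reference the field show up in the audit index:

    index=_audit action=search search=*my_field*
    | stats count by user

and dashboard definitions can be scanned for the field name via REST:

    | rest /servicesNS/-/-/data/ui/views
    | search eai:data=*my_field*
    | table title eai:acl.app eai:acl.owner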
We would like to patch the OS and would like to know what dependencies Splunk has on the RHEL 8 OS. Thanks.
I want to do a field extraction for my sourcetype under Fields -> Calculated Fields, but I am confused about how to draft the if condition to achieve the following logic:

- Some events contain only the userid field; for those, check that it is not null/empty and use userid as the user, otherwise fill in "unknown".
- Some events contain both the userid and cmdid fields; in that case (when the event has both fields), cmdid is the real user field.

So the logic should first check which of the two fields exist in the event and then derive the user accordingly.
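A minimal sketch of the calculated field with the described precedence (cmdid wins when present, then non-empty userid, else "unknown"); the sourcetype stanza name is a placeholder, and the same expression can be pasted into the Eval expression box in the UI:

    # props.conf, hypothetical stanza
    [my_sourcetype]
    EVAL-user = case(isnotnull(cmdid) AND cmdid!="", cmdid, isnotnull(userid) AND userid!="", userid, true(), "unknown")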
Hi, while running the command "mvn clean install" I am getting an error (screenshot attached) when building the process monitoring extension. I have configured all the required dependencies (Java 8 and Apache Maven 3.8.6). Kindly help me with this. Thank you in advance.
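Since the screenshot is not available here, only a generic diagnostic step can be suggested: Maven's -e and -X flags print the full error and debug output, which usually identifies the failing plugin or missing dependency:

    mvn clean install -e -X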
Hi folks, I am trying to understand the triggering options for the existing anomaly models available in UEBA. I tried to clone an existing model to inspect its trigger options, but not all of the models show up on the clone page. Could you please let me know whether there is any other way to identify the trigger options for an existing model?
Right now everyone must be facing the same issue regarding Microsoft's removal of basic authentication. We are using the Splunk Add-on for Microsoft Office 365 to ingest service status, service messages, and management activity logs from the Office 365 Management API. Because of the removal of basic authentication, we may no longer be able to use this add-on. We are looking for other ways to ingest logs from O365. Does anyone have any idea how to ingest these logs into Splunk other than with this add-on? Please kindly help us with this issue. Thanks in advance!
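For context, the modern replacement for basic auth against the Office 365 Management API is an Azure AD app registration using OAuth 2.0 client credentials; recent versions of the add-on authenticate this way (tenant ID, client ID, client secret), though whether the deployed version does is an assumption. A sketch of the underlying token request, with placeholders for the tenant and app values:

    curl -X POST "https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token" \
      -d grant_type=client_credentials \
      -d client_id={client-id} \
      -d client_secret={client-secret} \
      -d scope=https://manage.office.com/.default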
Hi, are there any Splunk apps that provide dashboards for AWS WAF & IPS? We didn't find any in the Splunk App for AWS Security Dashboards.
While configuring an IAM role in the Splunk AWS add-on, I am getting this error: In handler 'splunk_ta_aws_iam_roles': bad character (52) in reply size
I am creating a new file in the /var/log directory, but when I search for events I get zero results. How do I get Splunk to pick up the file so I can view it in the UI?
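A minimal monitor input sketch (inputs.conf); the file name, index, and sourcetype are placeholders, and this assumes a forwarder or local Splunk instance with read access to the file:

    # $SPLUNK_HOME/etc/apps/<app>/local/inputs.conf
    [monitor:///var/log/myapp.log]
    index = main
    sourcetype = myapp_log
    disabled = 0

Note that the user Splunk runs as needs read permission on the file; missing permissions are a common reason files under /var/log never show up.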
Dear forum, I'm trying to test my "Delegation" panel from the LOGbinder app, but without success. I have results in the Event Viewer file, but in the dashboard it appears as "no results found", unlike the example on the official site: https://www.logbinder.com/Content/Solutions/splunkapp1.jpg Everything else works fine. How can I simulate a test in my AD to get results in this "Delegation" panel?

    `filter_dc_winseclog_events` EventCode=5136 AttributeLDAPDisplayName=nTSecurityDescriptor
    | transaction maxspan=5s Correlation_ID
    | eval ObjectClass=if(ObjectClass="organizationalUnit" OR ObjectClass="group" OR ObjectClass="user" OR ObjectClass="computer" OR ObjectClass="domainDNS" OR ObjectClass="groupPolicyContainer",ObjectClass,"other")
    | rename ObjectClass as "Object Type"
    | rename DirectoryServiceName as Domain
    | timechart count by "Object Type"

Thanks, Paulo
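A quick diagnostic sketch for narrowing down where the pipeline goes empty: strip the panel's search back to the raw event and see which filter removes everything (the index/sourcetype scoping inside the macro is a common culprit):

    index=* EventCode=5136
    | stats count by AttributeLDAPDisplayName, ObjectClass

If no 5136 events exist at all, note that generating them requires the "Audit Directory Service Changes" subcategory to be enabled plus a SACL on the target object, and that a delegation change is a write to the object's security descriptor; that part is standard Windows auditing rather than anything LOGbinder-specific.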