All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I'm an employee of MEGAZONE CLOUD. We recently decided to conduct a test with Splunk and received a 50 GB license, but I don't know how to register it. Could you tell me how to register the license?

(New Splunk user) I want to get CyberArk REST API login events into Splunk. Is there a way to access REST API data directly from Splunk? Or do I need to call the REST API from something like a Python script and then send the data to Splunk? Or can a REST API endpoint be bound directly in a Splunk installation? Does anyone have an idea of the overall flow for connecting this to Splunk?

Any recommendations out there on which existing data model would be best to match Qumulo log events (network drive file access, modifications, deletes, reads, and so on) for CIM compliance? One might think the "Data Access" DM, but the fields are not even close; the Endpoint.Filesystem DM appears to be my best option.
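
If it helps the comparison, one quick way to see which CIM-style fields the Qumulo events already carry is a fieldsummary over a sample window; a minimal sketch, where the index and sourcetype names are assumptions to be replaced with your own:

    index=qumulo sourcetype=qumulo:audit earliest=-24h
    | fieldsummary
    | table field count distinct_count
    | sort - count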

Running a dbxquery through jobs.export, my results are limited to 100k rows. Do I need to paginate streaming results? Here's my code:

    import splunklib.results as results

    # service is an already-authenticated splunklib.client.Service instance
    data = {
        'adhoc_search_level': 'fast',
        'search_mode': 'normal',
        'preview': False,
        'max_count': 500000,
        'output_mode': 'json',
        'auto_cancel': 300,
        'count': 0
    }
    # <dbxquery> stands for the actual dbxquery search string
    job = service.jobs.export(<dbxquery>, **data)
    reader = results.JSONResultsReader(job)
    lst = [result for result in reader if isinstance(result, dict)]

This runs correctly except that the results always stop at 100k rows; there should be over 200k.

When I installed it, I received login credentials, and with those I was able to log in to the Splunk website and ask this question. But when I enter the credentials into Splunk itself, it doesn't let me log in; it always shows "login failed".

I am trying to build an alert which will trigger whenever one of our AWS-hosted Active Directory domains gets replacement Domain Controllers, i.e., we don't control if/when they replace the servers. I already have a simple alert which counts how many unique DCs it sees per hosted domain, and then I can do a simple:

    index=os sourcetype="xmlwineventlog"
    # here I perform some clean-up to identify the 2 desired fields...
    | stats count by Domain, DC_hostname
    | stats count by Domain
    | where count>2

(where the default number of DCs = 2, i.e., if there are more than that, AWS is in the process of replacing one or both.) The problem is that I lose the list of DCs. How can I filter out all the domains that just have the typical 2 DCs while still keeping the complete list of DCs from the non-typical domain?

FYI - this is what the search results look like before my final filter:

    Domain         DC_hostname
    ----------     -----------
    domain1        DC1
    domain1        DC2
    domain2        DC3
    domain2        DC4
    domain2        DC5

My current alert returns simply:

    domain2

whereas I want it to return:

    domain2        DC3
    domain2        DC4
    domain2        DC5
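
One pattern that keeps the DC list available while still filtering on the per-domain count is to aggregate the hostnames alongside the count and expand afterwards; a minimal sketch, reusing the Domain and DC_hostname field names from above (the clean-up step is assumed to have already produced them):

    index=os sourcetype="xmlwineventlog"
    | stats dc(DC_hostname) as dc_count values(DC_hostname) as DC_hostname by Domain
    | where dc_count>2
    | mvexpand DC_hostname
    | table Domain DC_hostname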

As far as I know, the mv commands only create an MV field out of values from a single field, in a column for example. I need to combine several fields into a single MV_field, but all of these fields have different names. For example, I have field1, field2, field3, and I need a single MV_field containing the values of all of them. Also, it would be nice if this could be dynamic, so that I can combine 'field*' into 'MV_field' with all the values. I am able to combine the different fields using eval's mvappend function, but it doesn't take wildcards. For example, "| eval MV_field=mvappend(field1,field2,field3)" works, but there isn't always the same number of fields. It would be really nice to be able to do "| eval MV_field=mvappend(field*)" to simply catch all that exist and throw them into a single MV_field. Is this possible?
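
A sketch of one way to get the wildcard behaviour, using foreach to feed every field matching field* into mvappend (mvappend generally ignores null arguments, so fields missing from an event are simply skipped; drop the last line if you want to keep the original fields):

    | foreach field*
        [ eval MV_field=mvappend(MV_field, '<<FIELD>>') ]
    | fields - field*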

I have a field named "code_value" which has values as follows:

    code_value
    ABC-123 JHLIK
    ABC-456 LKJF
    ABC-781 klklk
    ABC-22 olsd

Now how do I extract, from the code_value field, anything that comes before the space? Something like below:

    new_field_derived_from_code_value
    ABC-123
    ABC-456
    ABC-781
    ABC-22
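
A minimal sketch of one way to do this, capturing everything up to the first space into a new field while leaving code_value intact:

    | rex field=code_value "^(?<new_field_derived_from_code_value>[^ ]+)"
    | table code_value new_field_derived_from_code_value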

I'm deploying Splunk to monitor pods on Kubernetes, but we want to capture every event from every pod (standard output). Is it possible to capture that without creating a persistent volume?

Regards

Hello everyone, I have built a search that returns the email sender address as sender, its recipient list as recipient, and the number of emails received. One event looks like this:

    sender                              recipient            nr of emails sent
    user.sender@outsidecompany.com      user1@company.com    16
                                        user2@company.com
                                        user3@company.com
                                        user4@company.com
                                        user5@company.com
                                        user6@company.com
                                        user7@company.com

I want to restrict the recipient field to results with 10 recipients or more, because, let's say, I'm not interested in seeing outside emails from a sender that has sent an email to fewer than 10 people inside company.com. Do you have any idea? Best regards.
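
If recipient is a multivalue field (as the layout above suggests), a minimal sketch of the filter would be to count its values per result and keep only the large ones:

    | eval recipient_count=mvcount(recipient)
    | where recipient_count >= 10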

I have the following search:

    index=sandbox document_type=test-collat-record-json_v2
    | where ((isnotnull(test_result)) AND project_short="LNL" AND collateral_type="fw" AND ingredient_type="ifwi_bin" AND ingredient="csme")
    | dedup test_collat_record_json_guid
    | join type=inner left=L right=R where L.project_short=R.project_short L.collateral_type=R.collateral_type L.ingredient_type=R.ingredient_type L.ingredient=R.ingredient
        [search document_type=test-collat-record-summary-json]
    | table L.collat_record_json_guid, L.project_short, L.collateral_type, L.ingredient_type, L.ingredient, L.version, L.test, L.test_result, R.number_of_tests, R.passing_threshold

I'm joining data from a set of test results and then looking up info about what a passing set of results should look like from another data source, hence the join. It works for me and yields the table I expect. So far so good. Now I just want to aggregate the results, get counts of passing/failing tests, and compare that with the passing_threshold field. So I added:

    | stats count(eval(L.test_result=="SUCCESS")) as passingTests
            count(eval(L.test_result=="FAILURE")) as failingTests
            values(R.number_of_tests) as numTests
            values(R.passing_threshold) as pass_threshold
            by L.collat_record_json_guid

But the two evaluations of success and failure tests come back as zero, even though from the table they are clearly not zero; they should be 2 and 1 respectively. What have I done wrong? Is eval not going to work on joined data? I am using the correct aliases for the data.
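
One thing that commonly causes this: inside eval, an unquoted name containing a dot (L.test_result) is treated as the concatenation of the fields L and test_result rather than as a single field, so the comparison never matches. Wrapping the dotted field name in single quotes is usually the fix; a sketch of the same stats call with the quoting applied:

    | stats count(eval('L.test_result'=="SUCCESS")) as passingTests
            count(eval('L.test_result'=="FAILURE")) as failingTests
            values(R.number_of_tests) as numTests
            values(R.passing_threshold) as pass_threshold
            by L.collat_record_json_guid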

Hi, I'm trying to save the results of 2 queries on a dashboard into tokens and then add them up in a 3rd query. I'm running 8.2, and it's beginning to look like all of this only became available in 9? https://docs.splunk.com/Documentation/Splunk/9.0.0/DashStudio/searchTokens I don't see the option described there: "In the Edit Data Source panel, check the box for Use search results or job status as tokens." I tried some things like job.resultCount etc. but cannot get anything to interpolate. Am I totally out of luck on v8.2?

Hello,

We recently upgraded Splunk to v8.2.6. This broke the Demisto integration, so I upgraded the app to v4.0. This still has not fixed the issue: the alert will trigger, but it does not send to Demisto. Looking in the demisto.log file I found this:

    2022-08-08 10:25:42,792 - DEMISTOALERT - INFO - In Main Method
    2022-08-08 10:25:42,801 - DEMISTOALERT - ERROR - Error in main, error: name 'basestring' is not defined
    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/TA-Demisto/bin/demisto_send_alert.py", line 126, in <module>
        modaction = DemistoAction(sys.stdin.read(), modular_action_logger, 'demisto')
      File "/opt/splunk/etc/apps/TA-Demisto/bin/lib/cim_actions.py", line 136, in __init__
        if isinstance(self.sid, basestring) and 'scheduler' in self.sid:
    NameError: name 'basestring' is not defined

I went to the configuration page to update the configuration, but it gives me an error screen. Anyone have any ideas? Thanks!

I created a savedsearches.conf file to create a Splunk alert and restarted the Splunk service, but I still can't see the new alert in the UI. I am using the following configuration:

Thanks in advance!

I am creating a dashboard to show any new logs that were added to our environment within a period of time. For example, if we started ingesting AWS logs and Azure logs 2 days ago, is there a way I can create a dashboard that shows these 2 new ingestions? I am having trouble writing a search query that displays the name of a newly added index in the environment. Does anyone have any suggestions on how to solve this? Thanks.
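
A minimal sketch of one way to approach this: use tstats to find the earliest event per index (the same pattern works by sourcetype) and keep only those whose first event falls inside the window you care about. Note this only flags an index as new while its oldest retained event is still inside that window; the 2-day window below is just an example:

    | tstats min(_time) as first_seen where index=* by index
    | where first_seen >= relative_time(now(), "-2d@d")
    | convert ctime(first_seen)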

Hello, I have a big doubt about replication factor behavior in single-site and multisite clusters. In a single-site cluster, a hypothetical configuration of replication_factor=2 is quite easy: I have two copies of the same data in the site (originating + copy), and only one peer can go down. In a multisite cluster (for example, two sites), if I understood correctly:

- site_replication_factor = origin:1,site1:1,site2:1,total:2 - there are two copies (originating site = 1, other site = 1). Only one peer can be down: is that in total, or one per site?
- site_replication_factor = origin:2,site1:1,site2:1,total:3 - there are three copies (originating site = 2, other site = 1). Only two peers can be down: is that in total, or two per site?

Also, using site_replication_factor = origin:1,site1:1,site2:1,total:2, does losing the peer in the originating site mean that the search heads redirect the query to the second site (SF=2)? Thanks

I am trying to combine data from one sourcetype with a search input from a formatted CSV file; however, I can send only one value as the input for the search. My requirement is that, along with that input value, I want to bring 2 or 3 related fields through to the final output.

    index=cdr source=* sourcetype=cdr globalCallId_ClusterID=main destDeviceName IN (
        [ | inputlookup Wireless.csv
          | rex field=USERID "(?<USERID>\w{6})$"
          | eval destDeviceName="ABC" + 'USERID' + "*"
          | table destDeviceName
          | mvcombine destDeviceName
          | nomv destDeviceName
          | return $destDeviceName ])
    | table globalCallId_ClusterID globalCallID_callId callingPartyNumber originalCalledPartyNumber origDeviceName destDeviceName DateTimeOrigination DisconnectTime duration

The above query gives me the users whose values match the input for destDeviceName; however, when that is formatted into the table, I want to add additional fields that correspond to the entries in the input lookup file.
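
A sketch of one possible way to pull extra columns from the CSV back in after the filtering: re-derive the short user id from destDeviceName in the main results and join it against the same transformation of the lookup. This is only a sketch under the assumption that destDeviceName begins with "ABC" followed by the last six characters of USERID; OTHER_FIELD1 and OTHER_FIELD2 are placeholders for whichever Wireless.csv columns you actually need:

    index=cdr source=* sourcetype=cdr globalCallId_ClusterID=main destDeviceName IN (
        [ | inputlookup Wireless.csv
          | rex field=USERID "(?<USERID>\w{6})$"
          | eval destDeviceName="ABC" + 'USERID' + "*"
          | table destDeviceName
          | mvcombine destDeviceName
          | nomv destDeviceName
          | return $destDeviceName ])
    | rex field=destDeviceName "^ABC(?<short_id>\w{6})"
    | join type=left short_id
        [ | inputlookup Wireless.csv
          | rex field=USERID "(?<short_id>\w{6})$"
          | table short_id USERID OTHER_FIELD1 OTHER_FIELD2 ]
    | table globalCallId_ClusterID globalCallID_callId callingPartyNumber originalCalledPartyNumber origDeviceName destDeviceName USERID OTHER_FIELD1 OTHER_FIELD2 DateTimeOrigination DisconnectTime duration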

If we are running Splunk UBA in a cluster with three nodes, is there a way to take one node out of service and push all of its running tasks onto the other two nodes, or even onto a single node?

Hi All, according to the documentation for Splunk Cloud Classic Experience: "If your Splunk Cloud Platform deployment is on Classic Experience, you can manage your indexes programmatically using the Splunk REST API cluster_blaster_indexes/sh_indexes_manager endpoint." (Manage indexes on Splunk Cloud Platform Classic Experience - Splunk Documentation)

When I use the command:

    curl -k -H "Authorization: Bearer MyToken" https://MySplunk.splunkcloud.com:8089/services/cluster_blaster_indexes/sh_indexes_manager?output_mode=json

I get this response:

    <!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"><meta http-equiv="refresh" content="1;url=https://XX.splunkcloud.com/en-US/servicesNS/nobody/search/data/indexes"><title>303 See Other</title></head><body><h1>See Other</h1><p>The resource has moved temporarily <a href="https://XX.splunkcloud.com/en-US/servicesNS/nobody/search/data/indexes">here</a>.</p></body></html>

which brings me to a 404 page. Basically, I want to create an index using the REST API on Splunk Cloud (Classic Experience).

Hi everyone, we have another internal team that is trying to use the API to return some data we built for them. Unfortunately, they aren't able to get the payload, only the headers. Can someone suggest a solution or point out what we are doing wrong? Below is the response from the Splunk API for their call.

Target: https://SomeHost:Port/servicesNS/user/search/search/jobs/export

Request body:

    search=search inputlookup somefile.csv | table Day User emp_id Data

Response:

    <results preview='0'>
    <meta>
    <fieldOrder>
    <field>Day</field>
    <field>User</field>
    <field>emp_id</field>
    <field>Data</field>
    </fieldOrder>
    </meta>
    </results>
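
One thing worth checking, as a guess based on the request body above: inputlookup is a generating command, so a search string that begins with the implicit search command ("search inputlookup somefile.csv ...") looks for events containing those literal terms and returns no rows, which would produce exactly this header-only output. In that case the search string would need to start with a pipe instead, something like:

    search=| inputlookup somefile.csv | table Day User emp_id Data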