I recently installed the Splunk Add-on for Microsoft Security on Splunk Cloud and configured it to connect via API to an app registered in Azure. The data is still not loading in the Security section of the Microsoft 365 App for Splunk. I checked in Azure and the Incident.Read permission is enabled on the app. The Splunk documentation says I should go to Add-on > Inputs and click Create New Input to complete the configuration. When I go to the Inputs page I get the message: "Failed to load Inputs Page This is normal on Splunk search heads as they do not require an Input page. Check your installation or return to the configuration page. Error: Request failed with status code 500". I am not sure how I can fix this, as I have no other place to put the endpoints as inputs.

I want to monitor the running status of critical services on a Windows server, and I need to trigger a ticket if any critical service is stopped for x minutes. Can this be implemented as a normal alert, or does an episode need to be created? A normal alert has the drawback of triggering again and again if the service stays down for a long time.
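For reference, a minimal SPL sketch of the kind of alert search this could use, assuming the Splunk Add-on for Windows is collecting service state via WinHostMon (the index, sourcetype, field names, and service names below are assumptions that may differ in your environment). The search fires only if a listed service was never seen Running during the window, which approximates "stopped for x minutes"; alert throttling by host and Name can then suppress repeated triggers while the service stays down.

index=windows sourcetype=WinHostMon type=Service Name IN ("Service1","Service2") earliest=-10m
| stats count(eval(State="Running")) as running_samples, count as total_samples by host, Name
| where running_samples=0 AND total_samples>0
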
I have a query comprised of 2 subqueries (joins). The output is exactly as expected. When I try to push that data to a summary index, only the fields from the original query make it; for all fields and event data generated from the subqueries there is nothing. When I run the query (including '| collect index=summary' as the last line) everything expected is in the output, it just doesn't make it into the summary index.

index=blah_blah <followed by a search>
| join [<search string1> [ <search string 2]]
| fields _time IP DNS NETBIOS TRACKING_METHOD OS TAGS QID TITLE TYPE SEVERITY STATUS LAST_SCAN_DATETIME LAST_FOUND_DATETIME LAST_FIXED_DATETIME PUBLISHED_DATETIME THREAT_INTEL_VALUES THREAT_INTEL_IDS CVSS_V3_BASE VENDOR_REFERENCE RESULTS
| collect index=summary

The output is fully populated, yet the summary index is missing several fields (and the associated data). Note: the missing fields in the summary index are all from the subsearches/join.
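As a side note, a minimal way to inspect exactly what collect would write, assuming the documented testmode option (everything else below is just the query from the question, abbreviated):

index=blah_blah <followed by a search>
| join [<search string1> [ <search string 2]]
| fields _time IP DNS NETBIOS ...
| collect index=summary testmode=true

With testmode=true nothing is indexed; the search results show the events in the form they would be written to the summary index, which makes it easier to see whether the join fields are still present at that point.
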
Hoping someone can help me get past the last hurdle. I'm trying to create a custom function that dynamically calls other custom functions. I've got the part that generates the list of desired functions working, and I understand how to get the datapath into the dynamically selected custom function. I want to pass the results out to a filter object, but they seem to come out as only a single variable, not an array. What am I missing?

def rule_check(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, **kwargs):
    phantom.debug('rule_check() called')

    custom_function_results_data_1 = phantom.collect2(container=container, datapath=['build:custom_function_result.data.data_packets.*.packet'], action_results=results)
    custom_function_results_data_2 = phantom.collect2(container=container, datapath=['get_funcs:custom_function_result.data.found_functions.*.function_path'], action_results=results)

    custom_function_results_item_1_0 = [item[0] for item in custom_function_results_data_1]
    custom_function_results_item_2_0 = [item[0] for item in custom_function_results_data_2]

    rule_check__data = None

    ################################################################################
    ## Custom Code Start
    ################################################################################

    # Write your custom code here...
    parameters = []
    for item0 in custom_function_results_data_1:
        parameters.append({
            'data_w_fields': item0[0],
        })

    for func in custom_function_results_item_2_0:
        a = phantom.custom_function(custom_function=func, parameters=parameters, name='rule_check')

    ################################################################################
    ## Custom Code End
    ################################################################################

    phantom.save_run_data(key='rule_check:data', value=json.dumps(rule_check__data))

    filter_1(container=container)

    return
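For comparison, a minimal sketch of how the custom-code section could accumulate every call's return value into a list before saving it, assuming phantom.custom_function() returns the value you want to pass on (that return-value assumption and the variable name collected are hypothetical, not taken from the SOAR docs):

    # Hypothetical sketch: gather each dynamically called function's result into a list
    collected = []
    for func in custom_function_results_item_2_0:
        result = phantom.custom_function(custom_function=func, parameters=parameters, name='rule_check')
        collected.append(result)

    # Save the whole list so downstream blocks receive an array rather than a single value
    rule_check__data = collected
    phantom.save_run_data(key='rule_check:data', value=json.dumps(rule_check__data))
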
I am trying to import data by monitoring a file, but I keep getting the message below in the internal logs:

INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/usr/local/ios/var/logs/PN_Usage_iujj_Jun28.22.10.56.csv'
07-15-2022 11:37:42.256 -0400 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/usr/local/ios/var/logs/PN_Usage_iuhg_Jun28.22.16.16.csv'

inputs.conf:

[monitor:///usr/local/ios/var/logs/PN_Usage_*.csv]
index = xyz
sourcetype = ios:pn:usage
#crcSalt = vmr
initCrcLength = 10000

props.conf:

[ios:pn:usage]
CHARSET = UTF-8
LINE_BREAKER = ([\r\n]+)\"\d+\-\d+\-\d+\_\d+\:\d+
MAX_TIMESTAMP_LOOKAHEAD = 17
NO_BINARY_CHECK = null
SHOULD_LINEMERGE = false
disabled = false
pulldown_type = true
TIME_FORMAT = %Y-%m-%d_%H:%M
TIME_PREFIX = \"

Sample events:

2022-07-14_15:35, PO@abc, InOctets, 4541070, OutOctets, 12763951, Total MB used, 2.163127625
2022-07-14_15:35, BE@abc, InOctets, 75945647, OutOctets, 650376983, Total MB used, 90.79032875

Are there any other settings I need to include or remove? Thanks in advance.
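If part of the underlying issue is that these CSVs begin identically (so the forwarder thinks it has already read a file with the same initial CRC), a commonly used sketch is to salt the CRC with the file path instead of raising initCrcLength; the literal string <SOURCE> is the documented special value, and whether this actually applies to your files is an assumption to verify:

[monitor:///usr/local/ios/var/logs/PN_Usage_*.csv]
index = xyz
sourcetype = ios:pn:usage
crcSalt = <SOURCE>
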
Let's say I have a multivalue fieldA and a fieldB. I know you can do something like "| where field=value" in a search, or just have it in the first part of the search arguments, but is it possible to use all of the returned values of fieldA as the search terms for fieldB?
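A minimal SPL sketch of the usual subsearch pattern for this, assuming the goal is to keep events whose fieldB equals any value that fieldA takes (the index names are placeholders):

index=main
    [ search index=main
      | stats values(fieldA) as fieldB
      | mvexpand fieldB
      | fields fieldB ]

The subsearch is expanded into ( fieldB="value1" OR fieldB="value2" ... ) before the outer search runs, so every value of fieldA becomes a filter on fieldB.
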
Hi All, ServiceNow supports multiple ticket types such as "RITM", "SCTASK", and "INCIDENT". Our Splunk Cloud instance today can only create "INCIDENT" type tickets. I am very curious whether Splunk SOAR can extend this functionality and let us create "SCTASK", which is our preferred task type in the ticketing system. Thanks!
Hi all, I have events coming in that have multivalue fields, but it is not always the same fields that are multivalue. I want every multivalue field in the events returned by a search to be concatenated into a single-value field. Example:

Result now shows:
dest
    xyz
    fff

Result should show:
dest
    xyz [delimiter] fff

Just to be sure everyone understands: dest here is only an example. I need a query I can run that changes every multivalue field, regardless of field name. Cheers,
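A minimal SPL sketch of one way to do this, assuming the foreach/mvjoin pattern (the " | " delimiter is just an example; the mvcount guard leaves single-value fields untouched):

... your search ...
| foreach * [ eval <<FIELD>> = if(mvcount('<<FIELD>>') > 1, mvjoin('<<FIELD>>', " | "), '<<FIELD>>') ]

foreach * runs the eval template once per field, substituting each field name for <<FIELD>>, so the same mvjoin is applied to every multivalue field regardless of its name.
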
Hi Splunkers, I installed a private app with its own set of roles on a Splunk Cloud instance (Victoria Experience), but I couldn't find those roles under Settings > Roles. My authorize.conf is fine when tried locally. I am not sure what the possible reason could be. Are there any limitations on using app-specific roles on a Cloud instance? Please suggest.
Hi folks, I have a HF already sending data to one cloud instance, and I'd like to start sending data to a different cloud stack from the same HF. Can anyone give an example of the configuration in outputs.conf? Should I configure it in local or default? Should I use different receiving ports for this configuration? If so, which ones do you recommend? I appreciate your help. Thanks.
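A minimal outputs.conf sketch of the dual-destination idea, assuming two independent tcpout groups (the group names, hostnames, and port 9997 are placeholders; in practice each Splunk Cloud stack usually supplies its own forwarder credentials app with the real server list and certificates):

[tcpout]
defaultGroup = cloud_stack_a, cloud_stack_b

[tcpout:cloud_stack_a]
server = inputs1.stack-a.splunkcloud.com:9997

[tcpout:cloud_stack_b]
server = inputs1.stack-b.splunkcloud.com:9997

Listing both groups in defaultGroup clones all data to both stacks; if only some inputs should go to the second stack, the usual alternative is to set _TCP_ROUTING on those inputs in inputs.conf instead of changing defaultGroup.
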
Hello, We are using Splunk v8.2.5 (Build: 77015bc7a462 if this helps). Since we upgraded we no longer receive errors or warnings when stats, eventstats, or streamstats is not returning the correct values. We have a lookup csv of nearly 3 million records containing several fields that need to be counted and compared: prize code, address, email, etc. The eventstats command fails and there is no error or warning. However, the stats command works. This would be okay if we had only one field to be counted. We have 4-8 fields that must be counted and compared. Using the stats command quickly becomes a nightmare because every field that is not being counted in relation to the particular field in the by clause would need to be added using values(FIELDNAME). The eventstats command would be cleaner. Or would it?

| bin _time span=1d
| eventstats count(prize_code) as count_prize_code by _time, address
| dedup address, count_prize_code, _time
| eventstats count(_time) as count_prize_code_dates, sum(count_prize_code) as sum_count_prize_code by address
| dedup address, count_prize_code_dates, sum_count_prize_code
| table address, count_prize_code_dates, sum_count_prize_code

Thanks and God bless, Genesius
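One hedged possibility to check, not a confirmed diagnosis: eventstats holds its working set in memory and is bounded by max_mem_usage_mb in limits.conf, and with roughly 3 million lookup rows it may be hitting that ceiling without surfacing a visible error. A minimal limits.conf sketch for testing a higher ceiling on the search head (the value 2000 is arbitrary):

[default]
max_mem_usage_mb = 2000
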
Hello, I have a distributed environment with an IDX cluster and a DS. The DS is used to deploy config to the IDX cluster Manager Node, and from there to the IDX cluster nodes. It is working fine. I upgraded the DS from 8.1.6 to 8.1.10.1 (yes, because of SVD-2022-0608...). The Manager Node is on 8.1.6. After the upgrade I noticed this log message on the MN:

10.88.28.93 - - [13/Jul/2022:15:56:33.540 +0200] "GET /services/server/info HTTP/1.1" 401 130 "-" "Splunk/8.1.10.1 (Linux 3.10.0-1160.62.1.el7.x86_64; arch=x86_64)" - 0ms

(10.88.28.93 is the IP address of the DS.) I checked the Search peers config on the DS and the MN was there in a "sick" state. I edited its config by re-entering the Remote username and Remote password, and then the MN changed status to Healthy and everything is working fine again. My question is: what happened during the upgrade of the DS? My idea is that a new private/public key pair was generated on the DS on first run after the upgrade (and then I had to distribute the new public key to the MN by re-entering the Remote username and password, of course), but am I right? And if I am right, why did this happen? I have done many Splunk upgrades before and never experienced this... Any info/hint/clue will be highly appreciated. Thank you. Best regards, Lukas Mecir
I'm running Splunk Enterprise 8.2.5 with a deployment server on Windows 2019. I'm deploying the Splunk Add-on for Unix app to my Linux estate. The app runs various .sh shell scripts to capture data and ship it back to the indexers. The problem is that these shell scripts have no execute permission when deployed. I have to run a script on the forwarder to add the execute bit for the Splunk user so that the UF can run them. This is fine as a one-off, but if we update the deployment app it happens again. Is there any way to handle this with Splunk itself?
I have data with two fields: User and Account. Account is a field with multiple values. I am looking for a search that shows all the results where User does NOT match any of the values in Account. From the sample data below, the search should only give "Sample 1" as output.

Sample 1
User: p12345
Account: redfox, h12345, home\redfox, new@redfox.com

Sample 2
User: L12345
Account: redsox, L12345, sky\newid, sam@redsox.com

I have tried makemv, but I am not getting the desired output.
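A minimal SPL sketch of one approach, assuming the check is "does the User string appear inside any Account value" (building the like() pattern by concatenating User is an assumption worth testing on your data):

... your search ...
| mvexpand Account
| eval is_match = if(like(Account, "%" . User . "%"), 1, 0)
| stats max(is_match) as any_match, values(Account) as Account by User
| where any_match = 0

mvexpand turns each Account value into its own row, the eval flags rows where the User string occurs inside the Account value, and the stats/where pair keeps only Users with no matching Account value at all.
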
Is it possible to change the log rotation timing for the internal logs that the Universal Forwarder and Heavy Forwarder write to the OS, for example splunkd.log? Currently the logs are rotated by file size, but can we rotate them on a daily basis instead? Is that possible?
It's a bit off-topic, but I have a kinda unusual use case. I want to get the events out of a Windows box and store them on a Linux machine (in this particular case it's a Windows VM and I want to export the events to the hypervisor). Of course for Linux it's easiest to receive syslog messages, but as we all know, Windows doesn't have a built-in syslog server and you can't easily get the events with built-in Windows tools to push through a syslog channel. So far I've been using the free SolarWinds Event Log Forwarder, but it has its flaws - most notably it has problems starting automatically with the Windows machine. It ends up with the process started but not forwarding events unless I manually disable and re-enable the subscriptions. That's unacceptable. So I was thinking that maybe I should just install a UF and, instead of using splunk-tcp output, push events with plain tcp output to a syslog server. Does anyone have experience with this? The upside is that I know the UF works relatively reliably and I wouldn't have to worry about it too much. The downside is that I would have to define a separate input for each event log channel (but I think I'd simply script it and have it run every few days to synchronise event log channels with inputs.conf). I could of course set up a whole Splunk Free environment on my hypervisor, but it would be a huuuuuge overkill. Any hints for the UF installation/configuration?
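A minimal outputs.conf sketch of the plain-TCP idea, assuming sendCookedData=false so the UF emits raw events rather than the cooked Splunk protocol (the group name, host, and port are placeholders, and whether the receiving syslog daemon accepts bare TCP lines without a proper syslog header is an assumption to verify on the Linux side):

[tcpout]
defaultGroup = raw_syslog

[tcpout:raw_syslog]
server = hypervisor.example.local:1514
sendCookedData = false
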
I'm bemused with Splunk again (otherwise I wouldn't be posting here ;-)). But seriously - I have an indexer cluster and two separate search head clusters connected to that indexer cluster. One shcluster has ES installed, one doesn't. Everything seems to be working relatively OK. I have a "temporary" index into which I ingest some events, from which I prepare a lookup by means of a report containing a search ending with | outputlookup. And that also works OK. Mostly. Because it used to work on the "old" shcluster (the one with ES), and it still does. But because we now have a new shcluster (the one without ES), and of course lookups are not shared between different shclusters, I defined the report on the new cluster as well. And here's where the fun starts. The report is defined and works great when run manually. But I cannot schedule it. I open the "Edit Schedule" dialog, I fill in all the necessary fields, I save the settings... and the dialog closes but nothing happens. If I open the "Edit Schedule" dialog again, the report is still not scheduled. To make things more interesting, I see entries in conf.log, but they show:

payload: {
    children: {
        action.email.show_password: { }
        dispatch.earliest_time: { }
        dispatch.latest_time: { }
        schedule_window: {
            value: 15
        }
        search: { }
    }
    value:
}

So there are _some_ schedule-related parameters (and yes - if I verify them in etc/users/admin/search/local/savedsearches.conf they are there):

dispatch.earliest_time = -24h@h
dispatch.latest_time = now
schedule_window = 15

But there is no dispatch schedule being applied, nor is the schedule enabled at all (the enableSched value is apparently not pushed with the confOp). So I'm stuck. I can of course manually edit savedsearches.conf for my user, but that's not the point. The version is 8.2.6.
Hello, We have a use case. Using Splunk DB Connect, we ingest data from various systems, especially from the ERP. Every change to an article in the ERP is pushed into a temp DB which is monitored by Splunk DB Connect. There are millions of data movements each day. But at the end of the day, we just need to work with the latest unique data in the system for each article. Each event has some 10-30 fields. What is the best way of getting rid of all the duplicates that are coming into the system? Delete? How? Skip? Lookup? Summary DB? How? What ideas might you have, or maybe some ideas I'm missing?
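A minimal SPL sketch of the search-time side of this, assuming each article has a unique key field (the names index=erp_changes and article_id are placeholders):

index=erp_changes earliest=-1d@d latest=@d
| stats latest(*) as * by article_id

stats latest(*) as * by article_id keeps only the most recent value of every field per article; run as a scheduled report (optionally followed by | collect into a summary index, or | outputlookup), downstream searches then only ever see the latest state of each article instead of the millions of intermediate changes.
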
Dear All, I am a rookie in Splunk and need your help to extract fields from a log. Example:

2022-07-15 14:30:43 , Oracle WebLogic Server is fully supported on Kubernetes , xsjhjediodjde,"approvalCode":"YES","totalCash":"85000","passenger":"A",dgegrgrg4t3g4t3g4t3g4t,rgrfwefiuascjcusc,

From this log I would like to extract the cash value and display it in tabular form as Date | Passenger | Amount. Please suggest.
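A minimal SPL sketch using rex, assuming the quoted key/value pairs always look like the sample ("totalCash":"85000" and "passenger":"A"); the capture-group and column names are just examples:

... your search ...
| rex field=_raw "\"passenger\":\"(?<Passenger>[^\"]+)\""
| rex field=_raw "\"totalCash\":\"(?<Amount>[^\"]+)\""
| eval Date = strftime(_time, "%Y-%m-%d %H:%M:%S")
| table Date, Passenger, Amount
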
Hi Everyone, I am writing to seek support on configuring the Dell EMC Isilon Add-on for Splunk. I installed the app (Dell EMC Isilon Add-on for Splunk Enterprise) in our dev environment on one of our indexers. The Isilon version used is Isilon OneFS 9.2.1.7, and the Splunk version is 8.0.4.

2. Is this add-on compatible with Isilon version 9.2.1.7? As per the Splunkbase documentation for this add-on:

Are the commands below mandatory on the Isilon side (as per the Splunkbase documentation)? Enabling audit on any of the Isilon storage may increase resource utilization, leading to performance degradation, so we are a bit skeptical about it.

3. On the set-up page of the add-on in Splunk, when entering the Isilon cluster IP, username and password and clicking Save, we get the error "Error occured while authenticating to Server".

On checking emc_isilon.log:

We changed the isilonappsetup.conf file with the setting verify = False, to ensure that if the above error is caused by a certificate issue the certificates are not considered; this was just a quick test to rule out certificates.

Can someone help with this? Thanks in advance.