All Topics


Hi everyone, I have set up a Splunk lab environment on Windows to experiment with threat activity. I need to get PowerShell logs into Splunk, but there is no option for the Application and Services Logs channels in the Splunk Web data inputs page. I want to collect these logs.
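One common workaround is to define the Windows event log input directly in inputs.conf instead of through Splunk Web — a minimal sketch, assuming the standard PowerShell Operational channel (verify the exact channel name in Event Viewer) and a hypothetical target index:

```
# inputs.conf on the Windows instance (sketch; restart Splunk after deploying)
[WinEventLog://Microsoft-Windows-PowerShell/Operational]
disabled = 0
index = winevents    # hypothetical index name; use one that exists
```

Splunk Web's checkbox list only shows a few default channels, but the WinEventLog stanza accepts any full channel name from Event Viewer's Applications and Services Logs tree.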
My query:

index=s_New sourcetype IN (Compare,Fire) | stats values(*) as * values(sourcetype) as sourcetype by sysid _time | fillnull value="" | eval Status=if(Fire_Agent_Version = "" AND Compare_Agent_Version = "","Not Covered","Covered") | search OS="*" Group="*" Name="***" Environment="*" | timechart span=1d count by Status | addtotals | eval "Covered %"=round((Covered/Total)*100,2) | eval "Not Covered %"=round(('Not Covered'/Total)*100,2) | fields _time "Covered %" "Not Covered %"

The search above is not producing the expected counts. I do get the expected counts per Status from this search:

index=s_New sourcetype IN (Compare,Fire) | stats values(*) as * values(sourcetype) as sourcetype by sysid | fillnull value="" | eval Status=if(Fire_Agent_Version = "" AND Compare_Agent_Version = "","Not Covered","Covered") | search OS="*" Group="*" Name="***" Environment="*" | stats count by Status | eventstats sum(*) as sum_* | foreach * [ eval "Status %"=round((count/sum_count)*100,2)] | rename count as Count | fields - sum_count | sort - Count

I think I am missing something in the timechart search. How do I get the same counts from timechart as the stats-only search above gives me?
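One likely cause, offered as a guess: grouping by sysid _time splits each sysid across every distinct event timestamp, so a sysid is counted many times per day, whereas the working search groups by sysid alone. Binning _time to the day before the stats keeps one row per sysid per day — a sketch:

```
index=s_New sourcetype IN (Compare,Fire)
| bin _time span=1d
| stats values(*) as * by sysid _time
| fillnull value=""
| eval Status=if(Fire_Agent_Version="" AND Compare_Agent_Version="", "Not Covered", "Covered")
| timechart span=1d count by Status
| addtotals
| eval "Covered %"=round((Covered/Total)*100,2)
| eval "Not Covered %"=round(('Not Covered'/Total)*100,2)
| fields _time "Covered %" "Not Covered %"
```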
Hello, I'm trying to filter one lookup with the values of another lookup. This is the situation:

Lookup roles.csv contains the field with the security roles I would like to check for:

Role
role1
role2
role3
role6

Lookup AssignedRoles.csv contains a field with all the assigned roles:

User    Role
User1   role2 role5
User2   role6
User3   role9 role8
User4   role7 role4
User5   role1 role2

Now I want to return a table with all the users in AssignedRoles.csv that have an assigned Role from roles.csv. Can anybody help me with an example query, if it is at all possible? Thanks, Robin
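One way to express this, sketched under the assumption that the lookup and field names are exactly as shown: use the roles lookup as a subsearch filter over the assigned-roles lookup.

```
| inputlookup AssignedRoles.csv
| search [| inputlookup roles.csv | fields Role ]
| table User Role
```

The subsearch expands to (Role=role1) OR (Role=role2) OR ..., and search matches if any value of a multivalue Role field matches.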
Hi friends, does anyone know of an app for the Cyberoam firewall? After Cyberoam was sold to Sophos, the other apps were removed from the Splunk site and there is only one add-on left. I need a Cyberoam app for Splunk.
I'm busting my head and I can't seem to get anywhere. I currently have all my F5 logs going into the sourcetype f5:bigip:syslog, and I would like to split the data into two new sourcetypes based on the format of the data: one for the APM logs and one for the HTTP logs. Can someone explain how to go about this? The logs are sent as syslog via a UF, so I know I need to do this on the indexers. Will I have to create a custom app?
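The usual mechanism is an index-time sourcetype rewrite with props and transforms on the indexers — a sketch; the new sourcetype names and the regexes are placeholders you would replace with patterns that actually distinguish your APM and HTTP events:

```
# props.conf
[f5:bigip:syslog]
TRANSFORMS-f5_split = f5_set_apm, f5_set_http

# transforms.conf
[f5_set_apm]
REGEX = apm_marker_pattern_here
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::f5:bigip:apm

[f5_set_http]
REGEX = http_marker_pattern_here
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::f5:bigip:http
```

Packaging this as a small custom app pushed to the indexers (e.g. via the cluster master) is the conventional approach; events that match neither regex simply keep the original sourcetype.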
Hello team, I am facing an issue while trying to extract fields from the event below. Please help with this.

Event:
150022 High 2021.11.22 03:32:44 App Proxy: Utilization of preprocessing manager processes over 80% prd-Server06 1.2.3.4 Utilization of preprocessing manager internal processes, in % 100 %

Extraction used:
^(?:[^:\n]*:){2}\d+\s+(?P<field1>[^\t]+)(?:[^\.\n]*\.){3}\d+\s+(?P<field2>[^ ]+)(?:[^ \n]* ){7}\%\s(?P<field3>.+)

All the other fields are extracted as expected, but field2 is not extracting the intended field. Please let me know how I can fix field2 here.
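For a line this regular, anchoring each capture on the literal structure tends to be more robust than counted skips. A sketch that names every piece, under the assumption that the pieces of interest are the event ID, severity, timestamp, message, server name, and IP (the field names here are illustrative):

```
| rex field=_raw "^(?<event_id>\d+)\s+(?<severity>\S+)\s+(?<ts>\d{4}\.\d{2}\.\d{2} \d{2}:\d{2}:\d{2})\s+(?<message>.+?)\s+(?<server>\S+)\s+(?<ip>\d{1,3}(?:\.\d{1,3}){3})\s+(?<detail>.+)"
```

The lazy message capture stops as soon as the server-plus-IP pair can match, so on the sample it would yield server=prd-Server06 and ip=1.2.3.4.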
Hi! I have a setup where I must clone and forward data to a third party. Can somebody clarify: if I disable useACK, does the flow to the other outputs keep going even when one destination is unreachable? Looking at the outputs.conf spec, it is not fully clear to me what happens when the forwarder writes to the socket and the destination is unreachable:

* When set to "false", the forwarder considers the data fully processed when it finishes writing it to the network socket.

Thanks in advance!
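For context, a cloned-output layout usually looks like the sketch below (group names and hosts are placeholders). Note that useACK only governs acknowledgement; if a destination's output queue fills because the socket cannot be written, the pipeline can still block unless you allow that group to drop, e.g. with dropEventsOnQueueFull:

```
# outputs.conf on the forwarder (sketch)
[tcpout]
defaultGroup = indexers, third_party   # data is cloned to both groups

[tcpout:indexers]
server = idx1.example.com:9997
useACK = true

[tcpout:third_party]
server = receiver.example.com:9997
useACK = false
# let this clone drop after 30s of a full queue instead of stalling the pipeline
dropEventsOnQueueFull = 30
```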
Hi team, I registered on Splunk VictorOps, but I have not been able to verify my mobile number (the verification code is never received). My account details are below.

Email - danny@prezzee.com
User Name - danny_hamshananth
Contact Number - +94 75 074 0708 (Sri Lanka)

I need your prompt support. Thanks & regards, Danny Hamshananth
Splunk trial on Linux, WSL (Ubuntu 20.04). The Splunk package was downloaded and extracted, and when starting Splunk there is the following error:

homePath='/opt/splunk/var/lib/splunk/audit/db' of index=_audit on unusable filesystem. Validating databases (splunkd validatedb) failed with code '1'.

I have checked the location, /opt/splunk/var/lib/splunk/audit/db, and it contains the following files: test.2BREdp test.PNih3U
I have raw data, and I would like to search for domains within it, output them to a field, and then run stats to show a count of each unique domain.

Example of raw data: "This investigation is really great and we found the suspicious domain google.com"

I would like to:
1. Search for domains within the raw data and output each domain to a field that I can show in a table (let's call it "Domain").
2. Run stats to show the number of occurrences.

So ideally, my finished result would be:

Domain       count
google.com   50
yahoo.com    30

Any assistance is greatly appreciated, thank you.
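A sketch of one approach: pull candidate domains out with rex, expand the multivalue result, and count. The TLD alternation here is a deliberately small placeholder; a production pattern would need a much fuller TLD list or a dedicated URL-parsing lookup:

```
index=your_index sourcetype=your_sourcetype
| rex max_match=0 field=_raw "(?<Domain>\b[a-zA-Z0-9][a-zA-Z0-9\-\.]*\.(?:com|net|org|io|edu)\b)"
| mvexpand Domain
| stats count by Domain
| sort - count
```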
I am trying to connect AWS Lambda with AppDynamics, but it is giving this error: "Your SaaS subscription tier does not support Serverless APM for AWS Lambda." Please help me. @here
I have the following conf file settings to drop any event that contains the phrase "Allow all zones to query DNS anycast address" in Palo Alto logs:

props.conf
[pan:traffic]
TRANSFORMS-paloanycast = paloruledrop

transforms.conf
[paloruledrop]
REGEX = Allow all zones to query DNS anycast address
DEST_KEY = queue
FORMAT = nullQueue

Snippet of event:

Nov 20 07:11:00 firewall,2021/11/20 07:10:59,010701010006,TRAFFIC,end,2305,2021/11/20 07:10:59,x.x.x.x,y.y.y.y,0.0.0.0,0.0.0.0,Allow all zones to query DNS anycast address,,,dns,vsys7,vrf,xxx-xx-x,xxx.xxx,xxx.xxx,data-data-data,2021/11/20...

I applied the changes on my cluster master and pushed them to the indexers (and verified the indexers got the config), but these "Allow all zones to query DNS anycast address" events are not being dropped. I verified the regex on regex101.com. Any help would be appreciated. Thanks.
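One possibility worth checking, offered as a guess: the Palo Alto add-on typically ingests events under an initial sourcetype (often pan:log) and rewrites it to pan:traffic at index time, and index-time transforms run against the sourcetype the event arrives with, so a drop transform attached to pan:traffic may never fire. A sketch of the variant to test:

```
# props.conf — attach the drop to the sourcetype the data arrives with
# (pan:log is an assumption; check what your inputs actually assign)
[pan:log]
TRANSFORMS-paloanycast = paloruledrop
```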
Hi, I need to find the 5 "Errors" peak points per server, sorted by date.

Here is my SPL:

index="myindex" err* | rex field=source "\/data\/(?<product>\w+)\/(?<date>\d+)\/(?<servername>\w+)" | eventstats count as Errors by servername

Expected output:

servername   Time                      Errors
server1      2021-11-19 02:00:00,000   500
server1      2021-11-19 10:00:00,000   450
server1      2021-11-19 18:00:00,000   300
server1      2021-11-19 20:00:00,000   800
server1      2021-11-19 23:00:00,000   9000
server2      2021-11-19 01:00:00,000   250
server2      2021-11-19 03:00:00,000   480
server2      2021-11-19 08:00:00,000   30000
server2      2021-11-19 09:00:00,000   463
server2      2021-11-19 10:00:00,000   100
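A sketch of one way to get the top 5 hourly peaks per server (the 1-hour span matches the sample timestamps but is an assumption): count errors per server per hour, rank the hours within each server, keep the top 5, then re-sort by time.

```
index="myindex" err*
| rex field=source "\/data\/(?<product>\w+)\/(?<date>\d+)\/(?<servername>\w+)"
| bin _time span=1h
| stats count as Errors by servername _time
| sort servername - Errors
| streamstats count as rank by servername
| where rank <= 5
| fields - rank
| sort servername _time
```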
Hi, I got this error when searching a specific index:

index="myindex"
Error in 'IndexScopedSearch': The search failed. More than 1000000 events found at time 1637181000.

1. What is the reason for this error?
2. How can I resolve this issue?

Any ideas? Thanks.
Hello, I have a situation with 25 different groups of people, where each group uses a different set of data (i.e. there are 25 data sets, one per group). We need to create a single dashboard, and also need to assign access roles so that each group can only see its own data set through that same dashboard.

How would I assign the access roles for those 25 groups?
- Should I create 25 different indexes, one per group, and assign an access role to each index?
- Is it possible to create 25 different sourcetypes under one index and assign access roles to the sourcetypes, one per group?
- Or are there other ways?

Thank you so much, any help will be highly appreciated.
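For what it's worth, role-based access control in Splunk is enforced at the index level (srchIndexesAllowed), not per sourcetype, so the one-index-per-group layout is the option that maps directly onto roles. A sketch with placeholder role and index names:

```
# authorize.conf (sketch; repeat per group)
[role_group01]
importRoles = user
srchIndexesAllowed = group01_data
srchIndexesDefault = group01_data
```

A single shared dashboard that searches across all the group indexes then shows each viewer only the events their role is allowed to search.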
We have specific IDs that track how requests move through the system. What I want to do is search for all these IDs in one index and then join all the other indexes to see how they funnel down; essentially, track how a request flows through our system and get all the associated service requests. I was thinking of something like:

index=foo | fields service, ID | join ID [ search index=* ] | stats count by service
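As an aside, the stats-by-ID idiom is often preferred over join for this kind of correlation, since join is subject to subsearch limits. A sketch with placeholder index names:

```
(index=foo OR index=bar OR index=baz) ID=*
| stats values(index) as indexes values(service) as services count by ID
```

Each row then shows every index and service a given ID touched, which is the "funnel" view in one pass.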
I am trying to correlate two different logs; one is in EST and the other is in UTC. For the UTC logs, I have tried to set the time zone to UTC in props and then let my user timezone preference do the conversion. props.conf was only updated on the search head:

[sourcetype]
TZ = UTC

This does not seem to work, and I have read various posts saying that updating the props file should work, but not for historical events. Is there a way to do this for already-indexed events?
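For background, TZ is applied when the timestamp is parsed at index time (on the indexer or heavy forwarder, not the search head), so it cannot retroactively change events that are already indexed. One display-time workaround for the historical data is to shift a copy of _time at search time — a sketch; the fixed 5-hour offset is an assumption that ignores daylight saving:

```
index=your_index sourcetype=your_utc_sourcetype
| eval corrected_time=_time - 5*3600
| eval corrected_display=strftime(corrected_time, "%Y-%m-%d %H:%M:%S")
```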
I have a log sample:

| LRU Config Message from RMQ: {"endpoint":"lru/ config", "data":{"timestamp":1637322539.953,"version":"1","aircraftTailId":"N123JB",[{"lruComponent":"Modem","lruHwPartNumber":"123456","lruHwSerialNumber":"C82821190191","lruRevisionNumber":"004","lruMacAddress":true}, {"lruComponent":"Server","lruHwPartNumber":"1244632","lruHwSerialNumber":"F39718480040","lruRevisionNumber":"004","lruMacAddress":null},

What I want to do is extract the data and create a table of the component and part number fields, like this:

lruComponent   lruHwPartNumber
Modem          123456
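Since the sample JSON looks malformed (the array has no key), spath may not parse it cleanly, so a rex-based sketch that pairs the two fields per object may be safer — assuming the key order inside each object is as shown:

```
| rex max_match=0 field=_raw "\"lruComponent\":\"(?<lruComponent>[^\"]+)\",\"lruHwPartNumber\":\"(?<lruHwPartNumber>[^\"]+)\""
| eval pair=mvzip(lruComponent, lruHwPartNumber, "|")
| mvexpand pair
| eval lruComponent=mvindex(split(pair,"|"),0), lruHwPartNumber=mvindex(split(pair,"|"),1)
| table lruComponent lruHwPartNumber
```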
Hey! We upgraded Splunk Enterprise Security to the latest version a few weeks ago; before, it was on version 4.x, I believe. It was detecting events before we upgraded, and after the upgrade, no more events.

Some of the events that were being triggered: Brute Force Access Behavior Detected, Geographically Improbable Access Detected, and Threat Activity Detected.

Where can I go to find out why I am not detecting these items after the upgrade? The objects are enabled in the Content Management screen. Here is what it shows:

Statistics
Avg. Event Count: 0
Avg. Result Count: 0
Avg. Run Time: 0:00:01
Invocations: 24
Skipped: 0
Success: 24
Update Time: Nov 19, 2021 1:00:00 PM

The data models have a green checkmark and so do the lookups.
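Those statistics say the searches run and succeed but return zero results, which usually points at empty source data rather than a scheduling problem. One quick sanity check, sketched here for the Authentication data model (substitute whichever model the failing correlation searches use), is to confirm the model still contains events after the upgrade:

```
| tstats summariesonly=true count from datamodel=Authentication where earliest=-24h by _time span=1h
```

If this returns nothing, the data model acceleration or the underlying sourcetypes/tags broke during the upgrade, and that is where to dig.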
We stopped receiving data from Tenable a few days ago. When I went to investigate, I could find nothing that had changed, but now we cannot add/edit our Tenable accounts without getting "No Tenable.sc Instance at <fqdn:443>".

Things I was able to do:
- Log into Tenable with the credentials just fine
- Perform a Test-NetConnection <FQDN> -Port 443
- nslookup was good
- Able to ping

Things I tried that failed:
- Use the FQDN and the IP address

The app is installed on our heavy forwarder, which has the Search Head and KVStore roles. Splunk Enterprise version is 8.2.2.1, on Windows Server 2019. We were running TA-tenable version 5.0.1 and it wasn't working; I upgraded to TA-tenable 5.2.1 and got the same error.

From our logs:

python.log
11-19-2021 07:20:52.558 -0800 ERROR AdminManagerExternal [8132 TcpChannelThread] - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [400]: Bad Request -- No Tenable.sc Instance at <fqdn>". See splunkd.log/python.log for more details.
splunkd.log
11-19-2021 07:20:52.558 -0800 ERROR AdminManagerExternal [18606 TcpChannelThread] - Stack trace from python handler:\n Traceback (most recent call last):\n File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 151, in init\n hand.execute(info)\n File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 636, in execute\n if self.requestedAction == ACTION_CREATE: self.handleCreate(confInfo)\n File "/opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/splunktaucclib/rest_handler/admin_external.py", line 40, in wrapper\n for entity in result:\n File "/opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/splunktaucclib/rest_handler/handler.py", line 113, in wrapper\n for name, data, acl in meth(self, *args, **kwargs):\n File "/opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/splunktaucclib/rest_handler/handler.py", line 82, in wrapper\n check_existing(self, name),\n File "<string>", line 21, in validate\n File "/opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/splunktaucclib/rest_handler/endpoint/__init__.py", line 82, in validate\n self._loop_fields('validate', name, data, existing=existing)\n File "/opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/splunktaucclib/rest_handler/endpoint/__init__.py", line 78, in _loop_fields\n model.fields,\n File "/opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/splunktaucclib/rest_handler/endpoint/__init__.py", line 77, in <lambda>\n lambda f: getattr(f, meth)(data, *args, **kwargs),\n File "/opt/splunk/etc/apps/TA-tenable/bin/ta_tenable/splunktaucclib/rest_handler/endpoint/field.py", line 51, in validate\n raise RestError(400, self.validator.msg)\nsplunktaucclib.rest_handler.error.RestError: REST Error [400]: Bad Request -- Please enter valid Address, Username and Password or configure valid proxy settings or verify SSL certificate.\n

ta_tenable_securitycenter.log
11-19-2021 07:20:52.558 ERROR pid=5980 tid=MainThread file=v1.py:_request:497 | Requests Error: HTTPSConnectionPool(host='<fqdn>', port=443): Max retries exceeded with url: /rest/system (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1106)')))

I've been troubleshooting this the last few days and any help would be appreciated.