Hey guys, I have the query below that returns the field values in a table. However, I need rows where the field "name_genesys" is equal to the field "user_genesys" not to be displayed in the table. Is there any way to restrict this view?

index=teste
| table _time, object_genesys, name_genesys, DBID_genesys, type_genesys, configuration_genesys, user_genesys
| sort - _time

Results:

_time                    object_genesys  name_genesys  DBID_genesys  type_genesys         configuration_genesys  user_genesys
2020-10-15 14:04:11.259  cfg1            default       134452        ConfigurationServer  csp243                 default
2020-10-15 14:04:09.364  cfg2            123434        43434         Configure            agd_tm3                agent1
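A minimal sketch of one way to drop those rows (assuming both fields are present on every event): filter with `where` before building the table.

```
index=teste
| where name_genesys != user_genesys
| table _time, object_genesys, name_genesys, DBID_genesys, type_genesys, configuration_genesys, user_genesys
| sort - _time
```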
Hey Splunkers, hope all is well.  Quick question.  I have a dashboard that I'd like to automatically email out to a few recipients on a monthly basis.  Is it possible to automate this with a dashboar... See more...
Hey Splunkers, hope all is well.  Quick question.  I have a dashboard that I'd like to automatically email out to a few recipients on a monthly basis.  Is it possible to automate this with a dashboard?  Suggestions are greatly welcomed.  Thanks.
I am looking into ways to pull alerts for events 4723 and 4724 and then send an email to the targeted user whose password was changed. My experience with Splunk is limited: I can get a generic alert set up to email me when it sees one of those event IDs, but that's it.

What I want:
- Monitor for password changes
- Send an email to the user in question telling them that their password was changed, either by themselves (4723) or by an admin (4724)
- The email would include the name of the admin who changed it, or just the user's display name with a generic message saying the password was changed

I have the Splunk Supporting Add-on for Active Directory module, which I am guessing would help?

Thanks
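A rough sketch of the alert search (index, sourcetype, and field names are assumptions; Windows event fields vary by add-on and should be checked against the actual data). Delivering the email to the affected user rather than a fixed recipient typically needs a custom alert action or a token-driven `sendemail`, which this sketch does not cover.

```
index=wineventlog sourcetype=WinEventLog:Security (EventCode=4723 OR EventCode=4724)
| eval changed_by=if(EventCode==4724, "an administrator", "the user")
| table _time, Account_Name, changed_by
```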
Here is how my base search output looks:

name   version  browser  runTime
call1  alpha    chrome   75
call1  beta     chrome   48
call2  alpha    firefox  30
call2  beta     chrome   78
call3  beta     firefox  56

I'm looking for a distinct list of "name, browser" that exclusively belongs to the "beta" version. Getting count and median values is a bonus. Here is the desired output:

name   version  browser  count(runTime)  median(runTime)
call2  beta     chrome   1               78
call3  beta     firefox  1               56

This is what I have so far, but it only gives me a diff: it can return alpha calls if those are not present in the beta version. I'm looking for ONLY beta calls.

baseSearch
| stats dc(version) as found_in_versions BY name, browser
| where found_in_versions < 2

Any help would be appreciated!
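One way to keep only the beta-exclusive name/browser combinations (a sketch, assuming the grouping field is `browser` as in the sample data): use `eventstats` so the version count is attached without collapsing the events, then aggregate.

```
baseSearch
| eventstats dc(version) as found_in_versions by name, browser
| where found_in_versions=1 AND version="beta"
| stats count(runTime) as count, median(runTime) as median by name, version, browser
```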
Hello! I'm trying to use the latest version of the Dashboards app, but for any 'table' or 'single value' object it shows an "updated x minutes ago" label. I don't want to keep it in my dashboard. How can I remove this?

Thanks!
Greetings! I am new to Splunk and trying to learn it, so please take it easy on me.

I set up an environment with a Kali VM (this is where Splunk Enterprise is installed), a Windows 10 Enterprise VM, and a Windows Server 2019 VM. I set up the Universal Forwarder on Windows 10, and when I go to Splunk I can see it listed as a "Host". I also set up the Kali VM to send its logs to Splunk, and I see it listed as a "Host" as well. However, the logs coming from Windows Server 2019 (set up as a Domain Controller) are not showing up under their own "Host"; they seem to be merged in with one of the other "Hosts".

It is my understanding that logs coming from the server should show up under a different host, so I should see the Kali VM, the Windows 10 VM, and the Server 2019 VM each listed as a host. However, as I explained, the server is not showing up as one.

If anybody is willing to help, please let me know what information you would like me to share.

Thank you in advance.

Kirk
All, I'm working on extracting some key info out of an Ansible HEC collector. I'm hoping to use json_extract for things like run time, machine, etc. The data shows up in Search, formatted with the proper JSON "tree" view and color coding. The Ansible app uses the _json source type.

When I tried to use

... | eval foo = json_extract(<objectname>) | table foo

I can only get it to show values for the first object in the list. After many hours of fiddling around I decided to see if I could get json_extract to work in a simpler scenario, so I tried the "cities" example from the Splunk online docs: https://docs.splunk.com/Documentation/SCS/current/SearchReference/JSONFunctions

I ingested the example below as a file. I did NOT use the _json source type, so with no index-time field extractions we should just have the raw JSON below.

{
  "cities": [
    { "name": "London", "Bridges": [
        { "name": "Tower Bridge", "length": 801 },
        { "name": "Millennium Bridge", "length": 1066 } ] },
    { "name": "Venice", "Bridges": [
        { "name": "Rialto Bridge", "length": 157 },
        { "name": "Bridge of Sighs", "length": 36 },
        { "name": "Ponte della Paglia" } ] },
    { "name": "San Francisco", "Bridges": [
        { "name": "Golden Gate Bridge", "length": 8981 },
        { "name": "Bay Bridge", "length": 23556 } ] }
  ]
}

I then try the following statement from the Splunk doc:

... | eval extract_cities = json_extract(cities) | table extract_cities

I get nothing, while the example says I should get results back. I'm on Splunk 8.0.6. Is this a bug? This is the first time I've had to work with JSON on this box. Many thanks in advance for the help.
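As a cross-check that the event really contains the JSON, `spath` can pull the same path; if this works while `json_extract` does not, the function may simply not be available on that Splunk version (the linked page documents Splunk Cloud Services, whose eval functions differ from Splunk Enterprise).

```
... | spath output=extract_cities path=cities
| table extract_cities
```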
Hi Splunkers,

Today I referred to this link https://wiki.splunk.com/Community:40GUIDevelopment to develop the interface. I use the Python library requests to upload the file, and use Splunk's CherryPy to receive the file. But I can't receive and save the file. I printed the parameter content, but I didn't receive any parameters. Here is my code.

I use the Python library requests to upload the file:

def forwardFile(self, filePath, fileName, url):
    files = {'ufile': open(os.path.join(filePath, fileName), 'rb')}
    data = {'xxx': 'aaa', 'xxdd': 'bbb'}
    headers = {}
    headers["content-type"] = 'multipart/form-data'
    r = requests.post(url, files=files, data=data, headers=headers, verify=False)
    logger.info(r)

I use Splunk's CherryPy to receive the file:

@route('/:action=receive')
@expose_page(must_login=False, methods=['GET', 'POST'])
@cherrypy.expose
def receive(self, **kwargs):
    logger.info("111")
    logger.info(kwargs)
    logger.info("222")

Then I see this in web_service.log:

2020-10-15 23:21:07,472 INFO [5f8868e36d107834250] <string>:68 - hello world:http://127.0.0.1
2020-10-15 23:21:07,501 INFO [5f8868e37f107d00ad0] <string>:145 - 111
2020-10-15 23:21:07,501 INFO [5f8868e37f107d00ad0] <string>:146 - {'action': 'receive'}
2020-10-15 23:21:07,501 INFO [5f8868e37f107d00ad0] <string>:147 - 222

How do I get the value of ufile so I can save it?
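One common pitfall with `requests` that matches these symptoms: when you pass `files=`, requests builds the multipart body and generates the boundary itself; overriding the Content-Type header with a bare 'multipart/form-data' throws the boundary away, so the server cannot split the parts. A small stdlib-only sketch (all names here are illustrative) showing why the boundary in the header matters:

```python
import uuid
from email import message_from_bytes

# requests would normally generate a boundary like this when you pass files=...
boundary = uuid.uuid4().hex
body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="ufile"; filename="test.txt"\r\n'
    "Content-Type: text/plain\r\n"
    "\r\n"
    "hello world\r\n"
    f"--{boundary}--\r\n"
).encode()

# A parser can only split the parts when the boundary appears in the
# Content-Type header -- exactly what the hand-written header drops.
raw = (
    b"Content-Type: multipart/form-data; boundary=" + boundary.encode()
    + b"\r\n\r\n" + body
)
msg = message_from_bytes(raw)
part = msg.get_payload()[0]
print(part.get_filename())         # the uploaded filename
print(part.get_payload().strip())  # the file content

# The likely fix on the client side (illustrative, not tested here):
#   r = requests.post(url, files=files, data=data, verify=False)
# i.e. drop the manual 'content-type' header and let requests set it;
# CherryPy should then expose the upload, e.g. as kwargs['ufile'].
```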
Hello, we are trying to parse logs from a D-Link DXS-3600, but we cannot find the correct format; we have tried several syslog versions but it does not work. Does anyone know what format these logs are in?

Oct 15 15:36:04 10.68.16.16 Oct 15 13:36:04 10.68.16.16 INFO: Successful login through Web (Username: Gestion, IP: 10.168.0.53)
Oct 15 15:41:45 10.68.16.16 Oct 15 13:41:44 10.68.16.16 INFO: Unit 1, Configuration uploaded by WEB successfully. (Username: Gestion, IP: 10.168.0.53, MAC: 00-00-00-00-00-00, Server IP: 10.168.0.53, File Name: running-config.cfg)
Oct 15 15:42:13 10.68.16.16 Oct 15 13:42:14 10.68.16.16 INFO: Unit 1, Configuration uploaded by WEB successfully. (Username: Gestion, IP: 10.168.0.53, MAC: 00-00-00-00-00-00, Server IP: 10.168.0.53, File Name: running-config.cfg)
Oct 15 15:42:33 10.68.16.16 Oct 15 13:42:34 10.68.16.16 INFO: Unit 1, Configuration uploaded by WEB successfully. (Username: Gestion, IP: 10.168.0.53, MAC: 00-00-00-00-00-00, Server IP: 10.168.0.53, File Name: startup-config.cfg)
Oct 15 15:42:44 10.68.16.16 Oct 15 13:42:44 10.68.16.16 INFO: Unit 1, Configuration uploaded by WEB successfully. (Username: Gestion, IP: 10.168.0.53, MAC: 00-00-00-00-00-00, Server IP: 10.168.0.53, File Name: startup-config.cfg)

Greetings
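These look like ordinary BSD-style syslog lines that picked up a second header from a relay (note the duplicated timestamp/IP pair at the start of each line). A hedged props.conf sketch that anchors timestamp extraction on the device's own (second) header; the sourcetype name and regex are assumptions to verify against the real feed:

```
[dlink:dxs3600]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# skip the relay's "Mon DD HH:MM:SS <ip> " header and read the device's timestamp
TIME_PREFIX = ^\w{3}\s+\d+\s[\d:]+\s[\d.]+\s
TIME_FORMAT = %b %d %H:%M:%S
```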
Hello Experts, I have a search with multiple appends on a dashboard panel which is taking longer than usual to generate results. There is a dropdown in the panel with Month-Year values, which is fed into the search as a variable when the end user selects one. The dropdown values are as shown below (3-month intervals):

Jan 2021
Apr 2021
Jul 2021
Oct 2021
Jan 2022

The search is fed one of the values from the above, as shown below:

.. | search .. | fields MonthYear, field1, field2, field3 | where MonthYear="$month$" | table field1, field2, field3

Given that the search is taking a long time, we plan to schedule it through a report that runs once a day and stores the results beforehand. Will it be possible to schedule the report to run with all of the above-mentioned "MonthYear" values? How can we achieve this? Any leads are appreciated. Thanks
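One common pattern (a sketch, not tested against this data; the saved-search reference is a placeholder): schedule the report without the month filter so it materializes every MonthYear, then have each panel read the cached results with `loadjob` and filter by the dropdown token.

```
| loadjob savedsearch="owner:app_name:monthly_report"
| search MonthYear="$month$"
| table field1, field2, field3
```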
Hi, I ran into a little issue when I configured "Identities and Assets". After the configuration, the Asset Center and Identity Center dashboards in ES do not work, even though:

The assets.csv and identities.csv lookup tables exist under Identity Management: OK
These lookup tables are in the 'Enabled' state: OK

Why does this behavior occur, and how can I make the dashboards take their results from assets.csv and identities.csv? Please help me with this.
Hi Splunk Support team,

We would like to check with you: currently a customer of ours has a valid license running on a Splunk installation on a server. Because the current server is old, we are planning to set up a new Splunk installation on a new server. We are going to perform a batch-by-batch migration over 6 months. Can both licenses be used concurrently while we are doing the migration?
Hi Team,

Enterprise v8.0.6 on a W10 platform (Swedish OS), with ITSI 4.4.5 on top of that. I checked the Known Issues in the release notes for 4.4.5.

Background: Looking at the ITSI Health Check dashboard I noticed that the itsi_event_grouping search always fails (it starts to run but then fails). After some troubleshooting I came across a Java exception in itsi_rules_engine.log:

2020-10-15 09:59:30,365 INFO [itsi_re(reId=KJo1,reMode=RealTime)] [main] RulesEngineSearch:52 - RulesEngineTask=RealTimeSearch, Status=Stopped, FunctionMessage="java.lang.NumberFormatException: For input string: "1602698533,696"
at sun.misc.FloatingDecimal.readJavaFormatString(Unknown Source)
at sun.misc.FloatingDecimal.parseDouble(Unknown Source)
at java.lang.Double.parseDouble(Unknown Source)
at com.splunk.itsi.rule.engine.core.utils.CommonUtils.createGroup(CommonUtils.java:747)
at com.splunk.itsi.rule.engine.core.utils.CommonUtils.getRestorableGroupsFromEvents(CommonUtils.java:705)
at com.splunk.itsi.rule.engine.core.TaskManager.restoreGroupState(TaskManager.java:1199)
at com.splunk.itsi.rule.engine.core.TaskManager.preProcessing(TaskManager.java:1285)
at com.splunk.itsi.rule.engine.core.TaskManager.startStreaming(TaskManager.java:1329)
at com.splunk.itsi.search.chunk.RulesEngineSearch.main(RulesEngineSearch.java:50)

OK, to find out where the input string "1602698533,696" comes from, back to the itsi_rules_engine.log file. Some lines above the ERROR there is a "groupInfoSearch" started:

2020-10-15 09:59:29,954 INFO [itsi_re(reId=1zMs,reMode=RealTime)] [main] TaskManager:344 - FunctionName=RunSplunkSearch, SearchName=groupInfoSearch, Status=Started (full SearchQueryText below)

Stripping down the search query, I could find events from KPI alerts that had this value, in itsi_first_event_time: 1602698533,696

Question: How can I get rid of this value, or work around it so the job can complete successfully?
Since the value is there in an event, and itsi_event_grouping runs over All time (real-time), my conclusion is that this job will always fail when it encounters this itsi_first_event_time value.

Grateful for any input on this.

Kind Regards
TobbeP

---------------------
This is the SearchQueryText="earliest=-24h latest=now _index_earliest=null _index_latest=null allow_partial_results=false search `itsi_event_management_group_index_with_close_events`
| stats max(itsi_group_count) as itsi_group_count values(itsi_is_last_event) as itsi_is_last_event max(itsi_last_event_time) as itsi_last_event_time first(itsi_parent_group_id) as itsi_parent_group_id first(itsi_policy_id) as itsi_policy_id first(itsi_split_by_hash) as itsi_split_by_hash first(itsi_first_event_id) as itsi_first_event_id min(itsi_first_event_time) as itsi_first_event_time min(itsi_earliest_event_time) as itsi_earliest_event_time latest(itsi_group_assignee) as itsi_group_assignee latest(itsi_group_description) as itsi_group_description latest(itsi_group_severity) as itsi_group_severity latest(itsi_group_status) as itsi_group_status latest(itsi_group_ace_template_id) as itsi_group_ace_template_id latest(itsi_group_title) as itsi_group_title by itsi_group_id
| where itsi_is_last_event!="true"
| sort 0 -itsi_last_event_time
| lookup itsi_notable_group_user_lookup _key AS itsi_group_id OUTPUT owner severity status
| lookup itsi_notable_group_system_lookup _key AS itsi_group_id OUTPUT is_active
| where is_active=1
| eval itsi_group_assignee=coalesce(owner, itsi_group_assignee), itsi_group_severity=coalesce(severity, itsi_group_severity), itsi_group_status=coalesce(status, itsi_group_status)"
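A sketch for locating all affected groups before deciding on a cleanup (uses the same macro as the failing search). The comma in "1602698533,696" looks like a locale decimal separator (the host runs a Swedish OS), which Java's parseDouble rejects, so inventorying every comma-formatted timestamp is a reasonable first step:

```
`itsi_event_management_group_index_with_close_events`
| regex itsi_first_event_time="^\d+,\d+"
| table _time, itsi_group_id, itsi_first_event_time
```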
Hi, when I try to access the mgmt port in the browser, it's not accessible. I have enabled SSL, so I removed SSL to check if that was the issue, but still no luck. The mgmt URL is accessible only from within the server where Splunk is installed. I checked all the ports on the firewall; 8089 is allowed. Anything wrong in the config below?

web.conf:
[settings]
httpport = 29092
loginDocumentTitleOption = custom
loginDocumentTitleText = Login | Test
loginFooterOption = custom
loginFooterText = Copyright ©2020 Test Inc. All Rights Reserved
loginPasswordHint = hello
enableSplunkWebSSL = 1
privKeyPath = /home/ubuntu/splunk/splunk/etc/auth/mycerts/privatekey.key
serverCert = /home/ubuntu/splunk/splunk/etc/auth/mycerts/allcertpem.pem
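One thing worth checking (a suggestion, not a confirmed diagnosis): the `mgmtHostPort` setting in web.conf controls the address the splunkd management interface binds to; if it points at loopback, port 8089 only answers locally even when the firewall allows it.

```
[settings]
# if this is 127.0.0.1:8089 (the restrictive form), splunkd only answers locally;
# binding to all interfaces exposes the management port to the network
mgmtHostPort = 0.0.0.0:8089
```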
In Access Management > Role > available search indexes, only 1000 indexes are shown, while we have more than 1000 indexes. Is there a way to increase this limit? We run Splunk 7.3.3.
Dear community, is there a way to import dashboards and their contents (panels and visualisations) from my local Splunk instance into a central Splunk server instance? And if so, how do I have to proceed? My dashboard and the contents are readable for everyone. Thanks and best regards,   Dietmar
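For what it's worth, Simple XML dashboards are stored as plain files on disk, so one option (paths assume a default install and an existing target app) is to copy the dashboard's XML file from the local instance into an app on the central server and fix up permissions afterwards; alternatively, open the dashboard's Source in the UI, copy the XML, and paste it into a new dashboard on the central instance.

```
$SPLUNK_HOME/etc/apps/<app>/local/data/ui/views/<dashboard_name>.xml
```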
I have logs like this:

user=userA ip=1.1.1.1 ...
user=userA ip=1.1.1.2 ...
user=userB ip=1.1.2.1 ...
user=userB ip=1.1.2.1 ...
user=userC ip=1.1.3.1 ...
user=userC ip=1.1.3.2 ...
user=userC ip=1.1.3.3 ...

Now I want to have a list of all users with their IPs and the count of the different IPs. First I do this:

search foobar | stats values(user) by ip

Result is:

userA  1.1.1.1 1.1.1.2
userB  1.1.2.1
userC  1.1.3.1 1.1.3.2 1.1.3.3

How do I count and display the IPs? It should look like this:

userA  1.1.1.1 1.1.1.2          2
userB  1.1.2.1                  1
userC  1.1.3.1 1.1.3.2 1.1.3.3  3
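A sketch that should produce the table above: group by `user` (rather than by `ip`) and add a distinct count of IPs per user.

```
search foobar
| stats values(ip) as ip, dc(ip) as ip_count by user
```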
Does anyone know where I can find an Ansible playbook for deploying Splunk UFs on Windows servers? I have found several for deployment on Linux/*NIX servers, but I'm having a lot of trouble finding a playbook for Windows. I checked Galaxy as well, but no luck unfortunately.
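For illustration only (the module is a real Ansible one, but the path, deployment-server address, and MSI flags are assumptions to adapt): the Windows side usually comes down to pushing the UF MSI with `ansible.windows.win_package`.

```
# Untested sketch: install the Splunk UF MSI on a Windows host
- name: Install Splunk Universal Forwarder
  ansible.windows.win_package:
    path: 'C:\temp\splunkforwarder-x64.msi'
    arguments: 'AGREETOLICENSE=Yes DEPLOYMENT_SERVER="ds.example.com:8089" /quiet'
    state: present
```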
Hello,

When I run the search below, it returns random results! Sometimes 1 event is displayed, and a few minutes later no events are returned. And sometimes the same event is returned except that its _time field differs, even for the same hostname!

[| inputlookup host.csv
 | table host
 | rename host as USERNAME ]
`wire` earliest=-30d latest=now
| fields USERNAME SNR RSSI
| eval USERNAME=upper(USERNAME)
| eval time=strftime(_time,"%Y-%m-%d %H:%M")
| search USERNAME=NTTA*
| lookup all.csv HOSTNAME as USERNAME output SITE DESCRIPTION_MODEL BUILDING_CODE ROOM
| stats last(time) as "Event time" last(RSSI) as RSSI, last(SNR) as SNR, last(DESCRIPTION_MODEL) as Model, last(SITE) as Site, last(BUILDING_CODE) as Building last(ROOM) as Room by USERNAME
| where (RSSI >= "-72" AND RSSI <= "-77") AND SNR <= "15"
| rename USERNAME as Hostname
| table "Event time" Hostname RSSI SNR Model Site Building Room

How can this be explained, please?
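Two things in the `where` clause are worth checking, on top of the instability that naturally comes from a sliding `earliest=-30d latest=now` window combined with `last()`: quoted values like "-72" compare as strings, and `RSSI >= "-72" AND RSSI <= "-77"` can never be true numerically (nothing is both above -72 and below -77), so the intended band is probably the reverse. A hedged sketch of the corrected filter:

```
| where RSSI <= -72 AND RSSI >= -77 AND SNR <= 15
```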
We are checking the NiFi option to send data to the Splunk Cloud HEC via a proxy. Can you please suggest an approach?