All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi team, I registered for a free trial and received the corresponding emails containing portal access information and the password set-up procedure. The environment was in the process of being set up, but it never completed, and I never received the connection details for the controller. Could you kindly assist? -Igor
Hi, please help: I'm having a problem viewing the indexes I created in my clustered environment. They were all created on the cluster manager ..._cluster and likewise on the deployer, but when I try to search them I don't see any of them. When I check the indexer GUI I see them under Indexes, but not in the search head GUI. What am I doing wrong? Also, I installed a TA (Add-on for Unix and Linux) and tried to use one of its monitor stanzas as an input on the DS, but it's still not working. My serverclasses are fine. Below is the stanza I copied from the TA, which I used in my inputs.conf in the local folder of the TA under deployment-apps. Kindly assist. Thanks.
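For context: a search head that is not attached to the indexer cluster cannot search the peers' data, no matter where indexes.conf was pushed. A minimal sketch of how a search head is typically joined to a cluster, assuming Splunk 8.1+ (older versions use -master_uri instead of -manager_uri) and the default management port 8089; hostname and secret are placeholders:

splunk edit cluster-config -mode searchhead -manager_uri https://<cluster-manager>:8089 -secret <pass4SymmKey>
splunk restart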
Hi everyone, could anyone help me find out whether Splunk is able to integrate with Google Analytics? Thanks in advance for any comments.
I know that with Splunk Dashboard Studio, making the dashboard itself conditional on a dropdown choice isn't a possibility anymore, but is it possible to make the data source used by the dashboard conditional on the dropdown choice? That way, the dashboard could update dynamically.
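Token substitution inside a data source query is one way to approximate this. A rough sketch of the Dashboard Studio JSON definition, assuming a hypothetical dropdown token named idx_token (all names and items are illustrative):

{
  "inputs": {
    "input_idx": {
      "type": "input.dropdown",
      "options": {
        "token": "idx_token",
        "items": [
          { "label": "Firewall", "value": "firewall" },
          { "label": "Proxy", "value": "proxy" }
        ]
      }
    }
  },
  "dataSources": {
    "ds_main": {
      "type": "ds.search",
      "options": {
        "query": "index=$idx_token$ | stats count by host"
      }
    }
  }
}

When the dropdown changes, the data source re-runs with the new token value, so any panel bound to ds_main updates dynamically.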
I'm trying to create a table with the top 5 results split into columns, so that I can have multiple results per line, grouped by date. Here's what I have:

| union
    [search index=Firewall BlockFromBadActor | top src_ip by Date limit=5 | rename count as IPCount]
    [search index=Firewall BlockFromBadActor | top dest_port by Date limit=5 | rename count as PortCount]
| stats values(*) as * by Date
| fields Date, src_ip, IPCount, dest_port, PortCount

What I currently get (each cell is a multivalue field):

Date        src_ip    IPCount   dest_port   PortCount
2022/11/25  1.1.1.1   5000      1           5000
            2.2.2.2   4000      2           4000
            3.3.3.3   3000      3           3000
            4.4.4.4   2000      4           2000
            5.5.5.5   1000      5           1000
2022/11/24  1.1.1.1   5000      1           5000
            2.2.2.2   4000      2           4000
            3.3.3.3   3000      3           3000
            4.4.4.4   2000      4           2000
            5.5.5.5   1000      5           1000

What I'm trying to get:

Date        IP 1     IP 1 Count   IP 2     IP 2 Count   Port 1   Port 1 Count   Port 2   Port 2 Count
2022/11/25  1.1.1.1  5000         2.2.2.2  4000         1        5000           2        4000
2022/11/24  1.1.1.1  5000         2.2.2.2  4000         1        5000           2        4000

I cannot seem to find any way to make the individual query results into new columns.
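One pattern that gets there is to rank each group with streamstats and then spray the values into dynamically named fields with eval's {field} substitution. A sketch for the IP half, assuming the same base search (repeat the pattern for dest_port and merge the two result sets by Date, for example with the existing stats values(*) as * by Date):

index=Firewall BlockFromBadActor
| top limit=2 countfield=IPCount src_ip by Date
| streamstats count as rank by Date
| eval ip_name="IP_".rank, count_name="IP_".rank."_Count"
| eval {ip_name}=src_ip, {count_name}=IPCount
| stats values(IP_*) as IP_* by Date

This yields one row per Date with columns IP_1, IP_1_Count, IP_2, IP_2_Count, which can then be renamed for display.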
Happy Friday, Splunkers! We are attempting to onboard data from Salesforce, but after reviewing the _internal index we are seeing multiple errors for deprecated functions that come by default with the add-on. Has anyone else experienced anything similar, or is it possible there is a misconfiguration somewhere? From the looks of it, a Python script will need to be modified.

11-25-2022 12:12:48.735 -0500 ERROR PersistentScript - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_salesforce/bin/Splunk_TA_salesforce_rh_account.py persistent}: return func(*args, **kwargs)
11-25-2022 12:12:48.735 -0500 ERROR PersistentScript - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_salesforce/bin/Splunk_TA_salesforce_rh_account.py persistent}: /opt/splunk/etc/apps/Splunk_TA_salesforce/lib/solnlib/utils.py:153: UserWarning: _get_all_passwords is deprecated, please use get_all_passwords_in_realm instead.
When navigating to the google_drive_setup dashboard in Splunk Cloud, I get the following HTML error:

common.js:1851 TypeError: Cannot set properties of null (setting 'ondragover')
    at i.setupDragDropHandlerOnElement (eval at _runScript (dashboard_1.1.js:347:86275), <anonymous>:55:32)
    at i.setupDragDropHandlers (eval at _runScript (dashboard_1.1.js:347:86275), <anonymous>:46:16)
    at i.initialize (eval at _runScript (dashboard_1.1.js:347:86275), <anonymous>:36:16)
    at t.View (common.js:1506:229444)
    at i.constructor (common.js:1851:1033344)
    at i [as constructor] (common.js:1506:236387)
    at new i (common.js:1506:236387)
    at eval (eval at _runScript (dashboard_1.1.js:347:86275), <anonymous>:349:23)
    at Object.execCb (eval at e.exports (common.js:629:64344), <anonymous>:1658:33)
    at Module.check (eval at e.exports (common.js:629:64344), <anonymous>:869:55)

Since I am on Splunk Cloud, I don't have access to create a passwords.conf manually. I am running Splunk Cloud 9.0.2. I don't get any error logs in Splunk _internal related to the page. Does anyone have a solution for resolving this? @LukeMurphey?
Hi all, I am getting the following error in Splunk: "Events may not be returned in sub-second order due to search memory limits. See search.log for more information. settings: [search]: max_rawsize_perchunk". When I search a particular time range, like 4 to 8, I get this error, but if I search over the last 15 minutes, 24 hours, or last 7 days, I do not. My understanding is that within the 4-to-8 time range there were a lot of events arriving within a single second.

1. Below are my configured props and sample logs:

20221012453012
20220812453012
20220912453012
20220612453012
H1S98765~~PR~;R ESC~AB~Thu Oct 12 12:34:56 IST 2022~B~1.22~2.22~3456.98~GF~4356BV
H1S98765~~PR~;Z ESC~AB~Thu Oct 12 12:34:56 IST 2022~B~1.22~2.22~3456.98~GF~4356BV
H1S98765~~PR~;M ESC~AB~Thu Oct 12 12:34:56 IST 2022~B~1.22~2.22~3456.98~GF~4356BV
H1S98765~~PR~;T ESC~AB~Thu Oct 12 12:34:56 IST 2022~B~1.22~2.22~3456.98~GF~4356BV

[logs:health:app]
TRUNCATE = 10000
TIME_PREFIX = (?:[^~]+~)~(?:[^~]+~){3}
TIME_FORMAT = %a %b %d %H:%M:%S %Z
disabled = false
MAX_TIMESTAMP_LOOKAHEAD = 75
CHARSET = UTF-8
NO_BINARY_CHECK = true
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\w{8}~~
ANNOTATE_PUNCT = false

2. Below are my configured props and sample logs:

[10/07/22 12:55:40"7451 IST] 89786545 medapplog 9[10/07/22 12:55:40"7451 IST-897654] [app=med, sucees=0, failed=10, validpoints=100] the events are assocuiated with the med application user=app client=med
[08/07/22 12:55:40"7451 IST] 89786545 medapplog 9[10/07/22 12:55:40"7451 IST-897654] [app=med, sucees=0, failed=10, validpoints=100] the events are assocuiated with the med application user=app client=med
[10/12/22 12:55:40"7451 IST] 89786545 medapplog 9[10/07/22 12:55:40"7451 IST-897654] [app=med, sucees=0, failed=10, validpoints=100] the events are assocuiated with the med application user=app client=med

[logs:med:app]
TIME_PREFIX = ^\[
TIME_FORMAT = %m/%d/%y %H:%M:%S:%3Q %Z
MAX_TIMESTAMP_LOOKAHEAD = 30
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\[\d{1,2}\/\d{1,2}\/\d{2}\s\d{1,2}:\d{2}:\d{2}:\d{3}\s\D{3}\]
TRUNCATE = 99999

Please let me know how to avoid this error when I search.
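For reference, the message points at a limits.conf ceiling. A hedged sketch of where that knob lives, assuming you can edit limits.conf (Splunk Enterprise, not Cloud); the value below is illustrative, not a recommendation:

# $SPLUNK_HOME/etc/system/local/limits.conf
[search]
# Maximum raw size (bytes) of a results chunk; when one second's worth of
# events exceeds it, Splunk may return those events out of sub-second order.
max_rawsize_perchunk = 200000000

A restart of the instance applying the change is typically required afterwards.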
Hello, fellow Splunkers! What I am trying to do is detect failed login attempts followed by a root password change in Linux, using a correlation search or a data model search.
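A rough SPL sketch of the sequence logic, assuming Linux secure logs live in a hypothetical index named linux_secure and carry the stock sshd/passwd message text (index name, span, and threshold are all illustrative):

index=linux_secure ("Failed password" OR "password changed for user root")
| eval action=if(searchmatch("Failed password"), "failed_login", "root_pw_change")
| transaction host maxspan=15m startswith=eval(action=="failed_login") endswith=eval(action=="root_pw_change")
| where eventcount > 3

The same idea can be expressed against the Authentication data model with tstats, which is the usual shape for an Enterprise Security correlation search.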
Hi everybody, let's say I'm monitoring the file test.log, which contains these lines:

2022-22-25 14:00 - row 1
2022-22-25 14:00 - row 2
2022-22-25 14:03 - row 3
2022-22-25 14:05 - row 4

At some point, I overwrite the original file with another test.log with these lines:

2022-22-25 14:00 - row 1
2022-22-25 14:00 - row 2
2022-22-25 14:03 - row 3
2022-22-25 14:05 - row 4
2022-22-25 17:10 - row 5
2022-22-25 17:10 - row 6

Currently, all the lines of the new test.log are ingested, so I have some duplicates. Is there a way to only index the last two rows?
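For context, a sketch of the monitor settings that drive this behavior, assuming the stanza sits on the forwarder (the path and index are placeholders):

# inputs.conf
[monitor:///var/log/test.log]
index = main
# Splunk recognizes an already-seen file by a CRC of its first 256 bytes
# and, on a match, resumes from the saved offset instead of re-reading.
# A rewritten file is re-ingested from scratch when that CRC no longer
# matches; widening the window can help when file headers are not distinctive.
initCrcLength = 1024

If the overwritten file genuinely starts with the exact same bytes, only the appended rows should be read, so duplicates usually indicate that the CRC check failed to match the new file to the old one.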
Hi everyone, I want to join 3 sources from the same index. The problem is that with join I lose data, because I'm over 50,000 results in the subsearch. So I'm trying to get my table via the "normal" search.

The logic is as in the picture: the source "NAS" is a reported fault on a specific production number (PRODNR). It includes the production number, the timestamp of the detection, and a unique ID (SNSM, one per fault) with the part code of the faulty part. "NAU" is the data of the processed/closed defect. The problem here, as you can see, is that the columns in the sources have the same names. MP is the number of the process step, so every source contains the PRODNR. NAS and NAU contain the SNSM IDs.

So I want to join NAU and NAS by the SNSM IDs and see whether they already passed process step 6, and whether a fault was processed before step 6 or was still open at the time the production number passed step 6.

My search that works is shown below, but it is limited to the 50,000 results. I'm trying to make it work with index=pfps-k sourcetype=NAS OR sourcetype=NAU OR sourcetype=MP; I get all the data, but I can't do the same as the join, i.e., compare the SNSM IDs and then check the production step.

index=pfps-k sourcetype=NAS ( PRODNR="1*" OR PRODNR="2*" )
| where 'SPERRE' like ("PZM51%")
| dedup PRODNR, PRUEFUNG
| join type=left max=0 left=NAS right=NAU where NAS.SNSM=NAU.SNSM
    [search index=pfps-k sourcetype=NAU ( PRODNR="1*" OR PRODNR="2*" ) | dedup SNSM]
| join type=left max=0 left=L right=MP where L.NAS.PRODNR=MP.PRODNR
    [search index=pfps-k sourcetype="MP" earliest=@d+6h
    | where MELDEPUNKT=6.0
    | where like(PRODNR,"1%") OR like(PRODNR,"2%")]
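A rough sketch of the join-free idiom for the NAS/NAU half, assuming SNSM is the shared key (sourcetype and field names are taken from the post; the derived field names are illustrative):

index=pfps-k (sourcetype=NAS OR sourcetype=NAU) (PRODNR="1*" OR PRODNR="2*")
| eval nas_time=if(sourcetype=="NAS", _time, null()), nau_time=if(sourcetype=="NAU", _time, null())
| stats values(PRODNR) as PRODNR min(nas_time) as detected_time max(nau_time) as processed_time by SNSM
| eval still_open=if(isnull(processed_time), "yes", "no")

The MP step can then be folded in with a second stats by PRODNR, which avoids the join subsearch limits entirely.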
Hi, I have to add a custom action option in NEAP for ticket creation from upcoming notable events. I have the APIs and a script ready for ticket creation; I just want to call those APIs from the Splunk UI through the NEAP option. Please refer to the screenshot below, where I want to add one more action. What exactly do we need to do in the backend to add a new action in NEAP? Your responses will be appreciated.
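A hedged sketch of the usual starting point, on the assumption that NEAP actions are backed by custom alert actions packaged in an app (the stanza name create_ticket and the script name are hypothetical):

# alert_actions.conf in your app
[create_ticket]
is_custom = 1
label = Create Ticket
description = Calls the ticketing API for the selected episode
payload_format = json
python.version = python3

The matching bin/create_ticket.py would then read the JSON payload from stdin and call your ticketing API.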
Hi all, I'm configuring Enterprise Security but I've found an unexpected issue: I'm trying to use the Maps feature associated with a source in the "Incident Review" dashboard. In detail: I have some notables, many of which contain an IP external to the customer, and I'd like to visualize the geographic origin of this IP using the Maps feature associated with the Additional Fields contained in the notable details. But when I right-click and choose the "Map <IP address>" option, it opens Google Maps, always at the same coordinates, which aren't the ones I'm searching for. Must I configure something to get this feature working, or has someone else experienced the same issue? Thank you for your attention. Ciao. Giuseppe
Hello, I use Splunk as an indexer and deployment server, and I have one universal forwarder installed. I'm getting an error when the Splunk forwarder tries to read one log file:

Ignoring file '/mnt/scn_data/log.txt' due to: binary

It works after I put a props.conf file into the app folder on the forwarder:

[cx_scan_logs]
CHARSET = UTF-16LE
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom

But after I make changes on the index server, the files on the forwarder like inputs.conf are updated and props.conf is deleted, and I get the error again. How can I tell Splunk not to delete the props.conf on the forwarder?
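For reference, the deployment server replaces the entire app directory on each deploy, so files added locally inside a deployed app are removed. A minimal sketch of the usual fix, assuming the app deployed to the forwarder is called my_forwarder_app (the name is a placeholder): keep the props.conf in the server-side copy of the app, so it is delivered alongside inputs.conf.

# On the deployment server:
# $SPLUNK_HOME/etc/deployment-apps/my_forwarder_app/local/props.conf
[cx_scan_logs]
CHARSET = UTF-16LE
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom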
I'm a Splunk PS consultant and have been assisting a client with upgrades and migration to SVA-compliant architecture (C1). All is well and fully operational on 9.0.2, and the client is happy with this improved and fully compliant deployment.

Following up from that work, we reviewed what sensible security hardening could be implemented across the deployment, and we agreed that the pass4SymmKey for the clustering stanza could be longer and more complex. We followed the docs, went to each instance's $SPLUNK_HOME/etc/system/local/server.conf, and updated the key in plain text. We restarted the splunkd daemon via systemd on all instances and checked the infrastructure. All functional, and the cluster remains operating properly: ingesting data, clustering operations correct.

However, there is one flaw, and that is the MC. It is no longer able to properly query the cluster. It also hosts the DS, which is working properly and serving apps to clients. It has all the search parameters correct and all nodes listed, and it was functional immediately before the rotation. Yes, I checked btool for the values on disk and decrypted them; all appears fine.

After an hour of troubleshooting and checking splunkd.log there was still no clue, but we thought perhaps we had gone too complex on the string with special characters. Rinse and repeat, updating all cluster nodes' pass4SymmKey to something less complex without special characters. It still failed to operate properly, and we spent another hour very carefully reviewing every stanza in operation and for consistency. We then decided to set up an MC on another node to compare: same exact issue, and all checks just come back greyed out. With time pushing on, we decided to revert to the original pass4SymmKey and restart the daemon. Guess what: still not working.

We moved on to other pressing matters, but I do not want to leave my client without an answer or a medium-term approach. Potential for a bug? A niche operation, rotating pass4SymmKey?
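For reference, a minimal sketch of the stanza being rotated on each cluster node (the value is a placeholder; Splunk re-encrypts it on restart). The MC carries the same stanza in its own server.conf when it is configured against the cluster, which is one place a stale key can hide:

# $SPLUNK_HOME/etc/system/local/server.conf
[clustering]
pass4SymmKey = <new-plain-text-key>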
Hi all, I tried to customize the Incident Review dashboard to display some additional fields such as user, src, or dest, as described in the Enterprise Security Admin course. At first I found that, to have these fields appear in the Additional Fields, I must also add them to the main dashboard columns, otherwise the additional field isn't displayed; this alone is something not documented. But the problem is that the field is displayed only for some notables and not for all (as I expected it to be). I also found that the src field is present in all the notables (except risk-based notables), whereas user and dest (the most important being user, which should always be present) are sometimes present and sometimes not. I supposed the issue was in the correlation search not adding the field to the notable, but when opening the notable via the contributing events link, the field is always present. Has someone else experienced this issue? Thank you for your attention. Ciao. Giuseppe
Hello Splunk lovers! I want help with a date field, and I want it fast. I have a field with this example format: date_started = 01.01.2016 0:00:00, and I want to take only the year from date_started, like 2016. Please help!
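A minimal sketch, assuming the layout is day.month.year followed by a time (the format string is the assumption to verify against your data):

... | eval year=strftime(strptime(date_started, "%d.%m.%Y %H:%M:%S"), "%Y")

A simple regex alternative that grabs the four-digit year directly:

... | rex field=date_started "\d{2}\.\d{2}\.(?<year>\d{4})"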
[user]$ sudo rpm -U --prefix=/opt/splunk splunk-9.0.1-82c987350fde-linux-2.6-x86_64.rpm
error: splunk-9.0.1-82c987350fde-linux-2.6-x86_64.rpm: not an rpm package (or package manifest)

Note: /opt/splunk is the Splunk binary location on my HF. Please advise.
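A quick sanity check worth running first, on the assumption that the downloaded file is corrupted or is not actually an RPM (an HTML error page or a .tgz saved under an .rpm name are common culprits):

[user]$ file splunk-9.0.1-82c987350fde-linux-2.6-x86_64.rpm
[user]$ md5sum splunk-9.0.1-82c987350fde-linux-2.6-x86_64.rpm   # compare against the checksum published on the Splunk download page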
Hi, I am facing a difference in count between stats and timechart for the same search and same filters.

Stats command (Last 24 hours):

search
| bin span=1d _time
| stats count by Status
| eventstats sum(*) as sum_*
| foreach * [eval "Comp %"=round((count/sum_count)*100,2)]
| rename count as Count
| fields - sum_count

comp 7126, error 37, Noncomp 146, NonRep 54, Total 7363

Timechart (Last 30 days):

search
| bin span=1d _time
| timechart count by Status
| addtotals
| eval "Comp %"=round((Comp/Total)*100,2)
| eval "Error %"=round((Error/Total)*100,2)
| eval "Noncomp %"=round((Noncomp/Total)*100,2)
| eval "NonRep %"=round((NonRep/Total)*100,2)
| fields _time, *%

comp 7126, error 36, Noncomp 146, NonRep 53, Total 7361

There is a difference of 2 in the counts between these two approaches. I am using a macro before the timechart or stats. Please help me with the solution or the cause of this issue.
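One way to rule out time-range drift: the two runs above use different windows ("Last 24 hours" vs "Last 30 days"), and relative windows resolve to different epochs each time a search is dispatched, so events indexed in between can shift the totals. A hedged check, pinning both pipelines to the same fixed bounds (the timestamps are placeholders):

search earliest="11/24/2022:00:00:00" latest="11/25/2022:00:00:00"
| bin span=1d _time
| stats count by Status

If the pinned versions agree, the delta came from the moving windows rather than from stats vs timechart.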
I have a scenario where I want to expand a field and show the values as individual events. Below is my query, which works fine for smaller intervals of time, but for larger intervals it is not efficient:

index=app_pcf AND cf_app_name="myApp" AND message_type=OUT AND msg.logger=c.m.c.d.MatchesApiDelegateImpl
| spath "msg.logMessage.matched_locations{}.locationId"
| search "msg.logMessage.numReturnedMatches">0
| mvexpand "msg.logMessage.matched_locations{}.locationId"
| fields "msg.logMessage.matched_locations{}.locationId"
| rename "msg.logMessage.matched_locations{}.locationId" as LocationId
| table LocationId

I have a JSON array called matched_locations which has the field locationId. I can have at most 10 locationIds in a matched_locations. I have thousands of events in the duration that will have this matched_locations JSON array. Below is an example of one such event with a bunch of matched_locations:

cf_app_name: myApp
cf_org_name: myOrg
cf_space_name: mySpace
job: diego_cell
message_type: OUT
msg: {
  application: myApp
  correlationid: 0.af277368.1669261134.5eb2322
  httpmethod: GET
  level: INFO
  logMessage: {
    apiName: Matches
    apiStatus: Success
    clientId: oh_HSuoA6jKe0b75gjOIL32gtt1NsygFiutBdALv5b45fe4b
    error: NA
    matched_locations: [
      {
        city: PHOENIX
        countryCode: USA
        locationId: bef26c03-dc5d-4f16-a3ff-957beea80482
        matchRank: 1
        merchantName: BIG D FLOORCOVERING SUPPLIES
        postalCode: 85009-1716
        state: AZ
        streetAddress: 2802 W VIRGINIA AVE
      }
      {
        city: PHOENIX
        countryCode: USA
        locationId: ec9b385d-6283-46f4-8c9e-dbbe41e48fcc
        matchRank: 2
        merchantName: BIG D FLOOR COVERING 4
        postalCode: 85009
        state: AZ
        streetAddress: 4110 W WASHINGTON ST STE 100
      }
      { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] }
    ]
    numReturnedMatches: 10
  }
  logger: c.m.c.d.MatchesApiDelegateImpl
}
origin: rep
source_instance: 1
source_type: APP/PROC/WEB
timestamp: 1669261139716063000

Can anyone help me with how I can expand this field efficiently? Thank you.
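A hedged sketch of the usual efficiency levers, assuming only LocationId is needed downstream: filter and trim fields before expanding, and let stats split the multivalue field instead of mvexpand (stats by a multivalue field emits one row per value, so the mvexpand memory limit never comes into play):

index=app_pcf cf_app_name="myApp" message_type=OUT msg.logger=c.m.c.d.MatchesApiDelegateImpl
| spath path=msg.logMessage.numReturnedMatches output=numMatches
| where tonumber(numMatches) > 0
| spath path=msg.logMessage.matched_locations{}.locationId output=LocationId
| fields LocationId
| stats count by LocationId

If individual rows (rather than distinct LocationIds with counts) are required, keeping only LocationId with fields before mvexpand still cuts memory substantially, since mvexpand duplicates every retained field onto each expanded row.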