All Topics



I'm trying to enable SAML SSO for my Splunk test instance. In the "Fully qualified domain name or IP of the load balancer" field I entered my instance name and tried to skip the port number, but it fills in "8443" automatically. So my Splunk ACS URL becomes https://<my instance name>:8443/saml/acs. After successful SAML authentication I land on https://<my instance name>:8443/saml/acs, which says "This site can't be reached". I'm not using any load balancer here; it's a trial version that I'm using for testing purposes. Please suggest how to fix this.
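For what it's worth, a minimal sketch of one direction, assuming a standalone instance with Splunk Web on the default port 8000 and no load balancer: point the SAML redirect at the web port via the fqdn and redirectPort settings in the [saml] stanza of authentication.conf (hostname and port below are placeholders for your instance):

# $SPLUNK_HOME/etc/system/local/authentication.conf -- sketch only
[saml]
fqdn = https://myinstance.example.com
redirectPort = 8000

With those set, the ACS URL becomes https://myinstance.example.com:8000/saml/acs, which resolves to Splunk Web instead of the unused 8443 load-balancer port; the ACS URL registered with your IdP must be updated to match.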
I am on Splunk 8.1 trying to create a dynamic dashboard. I am trying to create a multisearch query, whose subsearches are based on the checkboxes that the user selects.

<input type="time" token="field1">
  <label>Time</label>
  <default>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </default>
</input>
<input type="text" token="userinput1">
  <label>User Input 1</label>
</input>
<input type="text" token="userinput2">
  <label>User Input 2</label>
</input>
<input type="checkbox" token="indexesSelected" searchWhenChanged="true">
  <label>Indexes</label>
  <choice value="[search index=index1 $userinput1$ $userinput2$]">Index 1</choice>
  <choice value="[search index=index2 $userinput1$ $userinput2$]">Index 2</choice>
  <default></default>
  <initialValue></initialValue>
  <delimiter> </delimiter>
  <prefix>| multisearch [eval test1="test1"] [eval test2="test2"] </prefix>
</input>

The search part looks like this:

<search>
  <query>$indexesSelected$
| table _time, index, field1, field2, field3, field4
| sort Time</query>
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
</search>

This works as expected except that the final query looks like this:

| multisearch [eval test1="test1"] [eval test2="test2"] [search index=index1 $userinput1$ $userinput2$] [search index=index2 $userinput1$ $userinput2$]

How can I make $userinput1$ and $userinput2$ resolve to their token values from the user inputs in the dashboard rather than appearing as literal strings? I have tried to use <change> tags with eval and set based on the <condition> the user selects, but eval does not allow token values and inserts literal strings only. Something like this:

<change>
  <condition match="like($indexesSelected$,&quot;%index1%&quot;)">
    <eval token="finalQuery">replace($indexesSelected$,"index1", "[search index=index1 $userinput1$ $userinput2$]")</eval>
  </condition>
  <condition match="like($indexesSelected$,&quot;%index2%&quot;)">
    <eval token="finalQuery">replace($indexesSelected$,"index2", "[search index=index2 $userinput1$ $userinput2$]")</eval>
  </condition>
</change>
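One possible workaround, sketched under the assumption that tokens are substituted inside a <change>/<set> body (unlike inside <choice value="...">, where they stay literal): keep the choice values as plain markers and assemble $finalQuery$ in the change handler, then use $finalQuery$ in the panel search. Untested; conditions are evaluated in order, and since multisearch needs at least two subsearches, the single selections fall back to a plain search.

<input type="checkbox" token="indexesSelected" searchWhenChanged="true">
  <label>Indexes</label>
  <choice value="index1">Index 1</choice>
  <choice value="index2">Index 2</choice>
  <delimiter> </delimiter>
  <change>
    <condition match="like(&quot;$indexesSelected$&quot;,&quot;%index1%&quot;) AND like(&quot;$indexesSelected$&quot;,&quot;%index2%&quot;)">
      <set token="finalQuery">| multisearch [search index=index1 $userinput1$ $userinput2$] [search index=index2 $userinput1$ $userinput2$]</set>
    </condition>
    <condition match="like(&quot;$indexesSelected$&quot;,&quot;%index1%&quot;)">
      <set token="finalQuery">search index=index1 $userinput1$ $userinput2$</set>
    </condition>
    <condition match="like(&quot;$indexesSelected$&quot;,&quot;%index2%&quot;)">
      <set token="finalQuery">search index=index2 $userinput1$ $userinput2$</set>
    </condition>
  </change>
</input>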
Hello, sorry for another noob question. Is there a way to set a search result into a token in JS? For example:

<row>
  <panel depends="$panel_show$">
    <single>
      <search>
        <query>| makeresults | eval result=1016</query>
        <done>
          <set token="mytoken">$result.result$</set>
        </done>
      </search>
      <option name="rangeColors">["0x006d9c","0xf8be34","0xf1813f","0xdc4e41"]</option>
      <option name="rangeValues">[100,200,300]</option>
      <option name="underLabel">cool number</option>
      <option name="useColors">1</option>
    </single>
  </panel>
</row>

Is there a way to transfer the

<done>
  <set token="mytoken">$result.result$</set>
</done>

part to JS? Thank you.
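A minimal SplunkJS sketch of the equivalent, assuming the search in the XML is given an id (here "mysearch", an assumed name) so it can be fetched from the component registry; the results-model/token-model pattern below is the standard one, but verify the details against your Splunk version:

require([
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function (mvc) {
    // The search declared in the XML as <search id="mysearch"> (id is an assumption)
    var search = mvc.Components.get('mysearch');
    var results = search.data('results'); // results model for the search

    results.on('data', function () {
        var data = results.data();
        // Find the "result" column and read it from the first row,
        // mirroring $result.result$ in the <done> handler
        var idx = data.fields.indexOf('result');
        var value = data.rows[0][idx];

        // Write the token into both token models so $mytoken$ resolves everywhere
        mvc.Components.get('default').set('mytoken', value);
        mvc.Components.get('submitted').set('mytoken', value);
    });
});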
The input is configured to take the host value from the file name, and the name of the file the UF is reading changes partway through the file. Which of the following becomes the host name? (1) The name before the change (2) The name after the change
Hi, I am trying to integrate AWS ALB logs using SQS-based S3. However, I am getting the error below. I used the ELB Access Logs decoder and tried different source types.

2022-02-22 02:57:56,763 level=ERROR pid=18045 tid=MainThread logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=utils.py:wrapper:72 | datainput="symplistaging1_elb" start_time=1645498672 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunksdc/utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 668, in run
    decoder = self.create_file_decoder()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 572, in create_file_decoder
    return factory.create(**vars(args))
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/common/decoder.py", line 164, in create
    return decoder_type(**kwargs)
TypeError: 'NoneType' object is not callable

Any ideas or solutions are highly appreciated. BR, Gayan
It's always mentioned in the docs, but I couldn't find anywhere to download it, including Splunkbase.
I have recently upgraded to Splunk 8.2.3 in a test environment. The last step in upgrading our slightly outdated environment is to migrate the KV store. I don't understand the directions. On which system do we edit "the server.conf in the $SPLUNK_HOME/etc/system/local/ directory"? Is it the Deployer? The Cluster Master? Any of the search heads we have? If that file is edited on one system, how does it get pushed to the search heads? Also, could we leave storageEngineMigration=true in that file, or does it have to go back to false when not migrating? Thank you! https://docs.splunk.com/Documentation/Splunk/8.2.3/Admin/MigrateKVstore
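For reference, a sketch of the stanza that doc is describing, under the assumption (not confirmed by the doc excerpt here) that it is edited on each search head itself, since files in etc/system/local are per-instance and are not pushed by the deployer:

# $SPLUNK_HOME/etc/system/local/server.conf -- per the linked migration doc
[kvstore]
storageEngineMigration = true

Treat this as the shape of the config rather than an answer on where it belongs in a clustered deployment; whether the flag can stay true afterward is exactly the open question.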
I'm trying to extract a number that may not always be formatted the same way every time. Examples:

OK: Process matching httpd is using 0% CPU
OK: Process matching httpd is using 1.1% CPU
OK: Process matching httpd is using 24.1% CPU

It's the "0%" that is tripping me up. This will work for numbers with a decimal but not for a percentage that is just "0":

rex "using\s(?<CPU_util_perc>\d+.\d+)\%"

Any help is greatly appreciated.
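A sketch of one fix: make the decimal part optional with a non-capturing group, and escape the dot so it matches only a literal decimal point (in the original, the bare "." matches any character):

rex "using\s(?<CPU_util_perc>\d+(?:\.\d+)?)%"

This captures "0", "1.1", and "24.1" from the sample events above.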
I want to join two services (part of the same index) on a common field and show chosen fields from both searches.

index=test service=serv1: Name, RecordID, Version
index=test service=serv2: State, RecordID, Version

I want to combine the two searches by RecordID from serv2 (meaning, to optimize the query, first take RecordID from serv2 and match it against serv1). Notice the field name Version is common to both services, so I want to rename the Version field in serv2 to Version2. The final result should be: Name, Version1, Version2.

SQL query:

SELECT A.Name, A.Version, B.Version
FROM Service1 A, Service2 B
WHERE B.RecordID = A.RecordID
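A stats-based sketch of one SPL equivalent, using the index, service, and field names from the question (untested against real data):

index=test (service=serv1 OR service=serv2)
| eval Version1=if(service=="serv1", Version, null())
| eval Version2=if(service=="serv2", Version, null())
| stats values(Name) AS Name, values(Version1) AS Version1, values(Version2) AS Version2 BY RecordID
| where isnotnull(Version1) AND isnotnull(Version2)
| table Name, Version1, Version2

Collecting both services in one pass with stats generally scales better than the join command, whose subsearch is subject to result limits.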
Hello, I don't understand why a file coming from a Windows-based UF does not get indexed properly. By this I mean that some fields contain newlines that are interpreted by Splunk as event delimiters, turning a single event into multiple events. I can index that same file manually, or from a folder input on a Unix filesystem, without issue. I am using Splunk Enterprise 8.0.5 with UF 8.0.5 running on a Windows 10 VM. I am trying to index ServiceNow ticket data contained in an ANSI-encoded CSV file.

Scenarios that work and do not work:

1. I upload the CSV manually through the "Add Data" wizard specifying the sourcetype, and it works.
2. I deposit the CSV in a local folder on the same VM Splunk Enterprise is installed on, configured with the same sourcetype, and it works.
3. I deposit the CSV in a local folder on the same VM the UF is installed on, configured with the same sourcetype, and it does not work.

I have no idea why this last scenario does not work. (A diagram outlining the second and third scenarios was attached here, with the third one highlighted in red.)

Here is the file I am trying to index (contains one ticket):

"company","opened_by","number","state","opened_at","short_description","cmdb_ci","description","subcategory","hold_reason","assignment_group","resolved_at","resolved_by","category","u_category","assigned_to","closed_by","priority","sys_updated_by","sys_updated_on","active","business_service","child_incidents","close_notes","close_code","contact_type","sys_created_by","sys_created_on","escalation","incident_state","impact","parent","parent_incident","problem_id","reassignment_count","reopen_count","severity","sys_class_name","urgency","u_steps","closed_at","sys_tags","reopened_by","u_process","u_reference_area","calendar_duration","business_duration"
"XXX4-S.A.U.R.O.N","Jane Doe","INC000001","Closed","2021-06-01 08:34:04","Short description","SYS","Dear All,

For some reason this data doesn't get indexed incorrectly. There are two LF characters between this line and the previous one:
THIS ON THE OTHER HAND HAS A SINGLE LF ABOVE

Manual upload works, but input from windows does not...

Thank you for your help and assitance.
Jack","ERP","","ABC-123-ERB-SWD-AB-XXX","2021-06-10 10:19:14","Jack Frost","SOFTWARE","Query","Raja xxxxxx","Jack Frost","3 - Moderate","XXXXXX@DIDI.IT","2021-06-15 23:00:02","false","AWD Services","0","closure confirmed by XXXXX@fox.COM ","Solved (Permanently)","Self-service","XXXXX@fox.COM","2021-06-01 08:34:04","Normal","Closed","3 - Low","","","","1","0","3 - Low","Incident","1 - High","0","2021-06-15 11:00:11","","","DATA","DATA","783910","201600"

Here is the props.conf stanza, which has been placed exclusively on the indexer:

[snow_tickets]
LINE_BREAKER = ([\r\n])+
DATETIME_CONFIG =
NO_BINARY_CHECK = true
MAX_EVENTS = 20000
TRUNCATE = 20000
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TZ = Europe/Rome
category = Structured
pulldown_type = 1
disabled = false
INDEXED_EXTRACTIONS = csv
KV_MODE =
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = sys_updated_on
CHARSET = MS-ANSI
BREAK_ONLY_BEFORE_DATE =

(Screenshots were attached here of the data coming from the local folder, which works, and of the same data coming from the Windows UF, which does not.) As you can see, it treats a newline as a new event and does not seem to recognise the sourcetype. Is there any blatantly obvious thing that I've missed? Any push in the right direction would be great! Thank you and best regards, Andrew
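A likely direction, offered as a sketch: INDEXED_EXTRACTIONS is a structured-data setting that the universal forwarder applies itself, so the stanza has to exist on the UF as well; an indexer-only copy would explain why the folder input on the indexer works while the same file from the UF arrives broken and with an unrecognised sourcetype. A minimal stanza to deploy to the UF, copied from the settings above:

# On the UF: $SPLUNK_HOME/etc/system/local/props.conf (or in a deployed app)
[snow_tickets]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = sys_updated_on
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TZ = Europe/Rome
CHARSET = MS-ANSI

Restart the UF after deploying; events that were already indexed incorrectly will not be repaired retroactively.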
Hello, thank you for taking the time to read and consider my question; it's very much appreciated. I'm revamping a legacy Splunk deployment for a mid-size company that I work for and have recently deployed IT Essentials Work to monitor the health of both Windows and *nix hosts in our environment. This app has many wonderful features and visualizations, even though some or most are locked behind the ITSI paywall.

What I'm wondering (mainly from a security perspective) is whether there are equivalent apps that Splunk (or third parties, or even individuals) have developed to visualize network and authentication data collected from Windows and Unix endpoints. I know network bandwidth is included within the ITE suite, which is terrific, but that doesn't help me identify which processes are linked to remote network connections, or track lateral movement across the network.

Do people usually just develop apps internally that take care of this? If that's the case then that's totally fine, and I completely understand admins not wanting to share that outside of their own organization, but I can't help but feel I'm not the only one in this boat, and there must be others with this conundrum as well. As far as I know, this is something that used to be handled rather well by the purpose-built Splunk apps for Windows and *nix systems, but now that these are being deprecated this year, I'd like a long-term solution to this problem.

If these types of visualizations are typically reserved for EDR/EPP apps like CrowdStrike, Cylance, S1, Sophos, etc., I also get that, but I'm not actually sure whether those apps all have dashboards that would let you filter by host, user, process, etc. to identify suspicious remote network connections, or authentication attempts across a wide swath of monitored systems.

Again, I'd like to reiterate my appreciation for you taking the time to consider my question. I'm sure there's a simple solution that I just haven't thought of or stumbled across in my research, but rather than waste another week or two trying to find what everyone else is doing, I figured I'd just ask the experts myself. Thanks again!
I have seen a couple of answers about the use of proxies by "curl" where it is stated that it is not possible to specify a specific proxy for each request, but it was not clear to me whether it is possible to use a proxy configured at the system level. I see in the code the comment "### Doesnt use the HTTP_PROXY or HTTPS_PROXY defined in splunk-launch.conf". But is there a way to get the "requests" call to pick up another proxy config, e.g. in Splunk's server.conf/proxyConfig or in Linux's /etc/environment? In short, is there any way to get TA-webtools curl to use a proxy? Thanks
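For context, a sketch of how the underlying Python requests library can be pointed at a proxy when environment variables are ignored; whether TA-webtools exposes any such hook is exactly the open question, and the proxy URL below is a placeholder:

import requests

# requests normally honors HTTP_PROXY/HTTPS_PROXY from the environment
# unless the calling code disables that (session.trust_env = False) or
# passes proxies explicitly, as below.
proxies = {
    "http": "http://proxy.example.com:3128",   # placeholder proxy
    "https": "http://proxy.example.com:3128",
}
response = requests.get("https://example.com/api", proxies=proxies, timeout=30)
print(response.status_code)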
I am trying to get 10 events from Splunk, but it takes more than 40 minutes, while the UI returns results in less than 1 second.

String token = "token";
String host = "splunk.mycompany.com";

Map<String, Object> result = new HashMap<>();
result.put("host", host);
result.put("token", token);

HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);
Service service = new Service(result);

Job job = service.getJobs().create("search index=some_index earliest=-1h |head 10");
while (!job.isReady()) {
    try {
        Thread.sleep(500); // 500 ms
    } catch (Exception e) {
        // Handle exception here.
    }
}

// Read results
try {
    ResultsReader reader = new ResultsReaderXml(job.getEvents());
    // Iterate over events and print _raw field
    reader.forEach(event -> System.out.println(event.get("_raw")));
} catch (Exception e) {
    // Handle exception here.
}

What could be the cause of this? This code is from the Splunk Java SDK GitHub page. Token, host, etc. have been replaced with stubs due to NDA.
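One thing worth checking, offered as a sketch rather than a diagnosis: isReady() only means the job has been dispatched, so getEvents() can be called against a job that is still running. The SDK's documented polling pattern waits on isDone() instead:

// Sketch: wait for the job to finish rather than merely dispatch
while (!job.isDone()) {
    try {
        Thread.sleep(500); // poll every 500 ms
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}

If the job itself completes quickly (check it in the UI's Job Inspector) but the call still hangs for minutes, the delay is more likely network-level: DNS, a proxy, or slow reachability of the management port 8089 from the client.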
Hi, we are facing an issue where we are unable to forward logs into Splunk via rsyslogd. They are forwarded as shown below:

if $syslogfacility-text == "local4" then {
    action(
        type="omfwd"
        Target="syslog.ad.crop"
        Port="5514"
        Protocol="tcp"
        ## queue.type default Direct
        queue.type="LinkedList"
        ## queue.size default 1000
        queue.size="100000"
        queue.filename="local4"
    )
    stop
}

Logs were being ingested until 8th Feb. Please help to resolve this issue. Regards, Rahul
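A quick way to isolate where the pipeline breaks, sketched with standard tools and the host/port from the config above:

# On the rsyslog host: emit a test event on the local4 facility
logger -p local4.info "splunk rsyslog forwarding test"

# Check that the Splunk-side TCP listener is reachable at all
nc -vz syslog.ad.crop 5514

If the TCP connection fails, look at firewalls or the listener on port 5514; if it succeeds but the event never arrives, check rsyslog's disk queue (queue.filename="local4") for a backlog and review rsyslog's own error output.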
I have installed a new indexer but I am getting the error below. It looks like the data is copied, but I also don't see the server under Indexer Clustering: Master Node.

Two questions:
1. How do I add the new indexer to the list?
2. How do I resolve the error message?

02-21-2022 17:52:56.508 +0200 INFO CMSlave - event=addPeer status=failure shutdown=false request: AddPeerRequest: { _id= active_bundle_id=EE37C1F78B2D04FFE51AD60A72882ADB add_type=Initial-Add base_generation_id=0 batch_serialno=1 batch_size=1 forwarderdata_rcv_port=9997 forwarderdata_use_ssl=0 last_complete_generation_id=0 latest_bundle_id=EE37C1F78B2D04FFE51AD60A72882ADB mgmt_port=8089 name=243C06C6-E196-4E2D-A990-5AD71B271ED5 register_forwarder_address= register_replication_address= register_search_address= replication_port=8080 replication_use_ssl=0 replications= server_name=ilissplidx11 site=default splunk_version=7.3.4 splunkd_build_number=13e97039fb65 status=Up }
02-21-2022 17:52:56.508 +0200 ERROR CMSlave - event=addPeer start over and retry after sleep 100ms reason= addType=Initial-Add Batch SN=1/1 failed. add_peer_network_ms=3
02-21-2022 17:52:56.608 +0200 INFO CMSlave - event=addPeer Batch=1/1
02-21-2022 17:52:56.611 +0200 WARN CMSlave - Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json master=illinissplnkmaster:8089 rv=0 gotConnectionError=0 gotUnexpectedStatusCode=1 actual_response_code=500 expected_response_code=2xx status_line="Internal Server Error" socket_error="No error" remote_error=Cannot add peer=10.232.208.35 mgmtport=8089 (reason: http client error=No route to host, while trying to reach https://10.232.208.35:8089/services/cluster/config). [ event=addPeer status=retrying AddPeerRequest: { _id= active_bundle_id=EE37C1F78B2D04FFE51AD60A72882ADB add_type=Initial-Add base_generation_id=0 batch_serialno=1 batch_size=1 forwarderdata_rcv_port=9997 forwarderdata_use_ssl=0 last_complete_generation_id=0 latest_bundle_id=EE37C1F78B2D04FFE51AD60A72882ADB mgmt_port=8089 name=243C06C6-E196-4E2D-A990-5AD71B271ED5 register_forwarder_address= register_replication_address= register_search_address= replication_port=8080 replication_use_ssl=0 replications= server_name=ilissplidx11 site=default splunk_version=7.3.4 splunkd_build_number=13e97039fb65 status=Up } ].
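The key detail in the log is that the master cannot reach back to the new peer: "No route to host, while trying to reach https://10.232.208.35:8089/services/cluster/config". A sketch of how to verify and retry, assuming a firewall or routing rule is blocking the management port (the secret below is a placeholder):

# From the cluster master: confirm it can reach the new peer's management port
curl -k https://10.232.208.35:8089/services/server/info

# From the new indexer, once connectivity works: (re)join the cluster
# (CLI form for Splunk 7.3-era peers)
splunk edit cluster-config -mode slave -master_uri https://illinissplnkmaster:8089 -replication_port 8080 -secret <your_cluster_secret>
splunk restart

Once the peer registers successfully, it should appear in the Indexer Clustering view on the master on its own; there is no separate "add to list" step.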
I have a question on the Dev tutorial, as I can't figure out whether this behavior is expected. Under DESCRIPTION the output is:

AInterpidPanoramaofaMadScientistAndaBoywhomustRedeemBoyinAMonastery

None of the words in DESCRIPTION are delimited by white space. Is this the normal behavior?

This is Module 1 of the Splunk>Dev tutorial: https://dev.splunk.com/enterprise/tutorials/module_getstarted/

The step in question is "Set up the sample data bundle": to get the Eventgen sample bundle and send it to the devtutorial index, go to https://github.com/splunk/eventgen/blob/develop/tests/sample_bundle.zip and click Download to download the Eventgen sample data file, sample_bundle.zip, to your computer.
Hello, I'm currently using two CSVs to make a report:

index A: file A CSV
index B: file B CSV

I'm trying to add a value from raw CSV file A into raw CSV file B. My first search gives me these values:

Owner   | IP
Owner A | 10.10.10.2
Owner B | 10.10.10.3

I would like to add the Owner value from my first search into my second search, so the result looks like this (Owner comes from file A, matched on the same IP value; CVE and Risk come from file B):

Owner   | IP         | CVE | Risk
Owner A | 10.10.10.2 |     |
Owner B | 10.10.10.3 |     |

How can I get the Owner value from one CSV and add it to my search over the other? Regards, Miguel
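A sketch of one way to do this in SPL, assuming the two CSVs are searchable as indexes named indexA and indexB (placeholder names) and that IP is the common field:

index=indexB
| join type=left IP
    [ search index=indexA | fields IP, Owner ]
| table Owner, IP, CVE, Risk

If file A is also available as a lookup file, "| lookup fileA.csv IP OUTPUT Owner" achieves the same enrichment without a join.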
I just noticed a difference in license usage when looking at 30-day license usage. With "no split" I am within the license limit by 60 GB or so, but with "split by", for example by index, I am way over our license limit. It differs by about 90 GB in total between "no split" and "split by". No warnings are shown about license usage, so I think "no split" shows the correct summary. Does anybody have a clue why?
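One way to cross-check both views against the raw accounting data they are built on, sketched from the standard license_usage.log search:

index=_internal source=*license_usage.log type=Usage earliest=-30d
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) AS daily_GB

Running the same search split by idx (or by h/s/st) shows whether the per-index view is summing differently from the daily totals; note that host and source values can be squashed in license_usage.log, which is a known cause of split-by discrepancies.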
I have a dashboard with positive and negative values. It contains the difference from the month before. I want the font color green if the value is negative and red if positive. I've come to this:

<format type="color" field="verschil tov vorige maand">
  <colorPalette type="expression">if (like(value,"%-%"),"#0E9421","#EC6521")</colorPalette>
</format>

but this changes the background color, and I want to change the font color. Any idea if it is possible to change the font color in a similar way?
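Simple XML's <format type="color"> only drives the cell background, so a common workaround is a custom table cell renderer in JavaScript. A sketch, assuming the table is given id="myTable" in the XML (an assumed id) and that your version exposes the addCellRenderer API shown in the dashboard examples:

require([
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function (mvc, TableView) {
    var FontColorRenderer = TableView.BaseCellRenderer.extend({
        canRender: function (cell) {
            // Only handle the month-over-month difference column
            return cell.field === 'verschil tov vorige maand';
        },
        render: function ($td, cell) {
            var value = parseFloat(cell.value);
            // Green font for negative values, red for positive
            $td.text(cell.value).css('color', value < 0 ? '#0E9421' : '#EC6521');
        }
    });

    mvc.Components.get('myTable').getVisualization(function (tableView) {
        tableView.addCellRenderer(new FontColorRenderer());
    });
});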
403 Forbidden - unable to post questions in the Splunk community. My data is masked, but why am I still not allowed to post questions?