All Posts



On your forwarders, check what the current deployment server target is:

/opt/splunkforwarder/bin/splunk show deploy-poll

Then check where that configuration is coming from:

/opt/splunkforwarder/bin/splunk btool deploymentclient list --debug

It might be that the configuration has been set in the system local config (/opt/splunkforwarder/etc/system/local/deploymentclient.conf), or sometimes it's in a custom app (the btool output above should show you which). If so, change it to the new address (ensure firewalls and ports are accessible):

/opt/splunkforwarder/bin/splunk set deploy-poll <IP_address/hostname>:<management_port>
/opt/splunkforwarder/bin/splunk restart
Hello, I have this query:

index="github_runners" sourcetype="testing" source="reports-tests"
| spath path=libraryPath output=library
| spath path=result.69991058{} output=testResult
| mvexpand testResult
| spath input=testResult path=fullName output=test_name
| spath input=testResult path=success output=test_outcome
| spath input=testResult path=skipped output=test_skipped
| spath input=testResult path=time output=test_time
| table library testResult test_name test_outcome test_skipped test_time
| eval status=if(test_outcome="true", "Passed", if(test_outcome="false", "Failed", if(test_skipped="true", "NotExecuted", "")))
| stats count sum(eval(if(status="Passed", 1, 0))) as passed_tests, sum(eval(if(status="Failed", 1, 0))) as failed_tests, sum(eval(if(status="NotExecuted", 1, 0))) as test_skipped by test_name library test_time
| eval total_tests = passed_tests + failed_tests
| eval success_ratio=round((passed_tests/total_tests)*100,2)
| table library, test_name, total_tests, passed_tests, failed_tests, test_skipped, success_ratio test_time
| sort + success_ratio

I'm trying to make it dynamic so it returns results for numbers other than '69991058'. How can I do that?
I'm trying with regex, but it looks like I'm doing something wrong, since I'm getting 0 results while the first query does return results:

index="github_runners" sourcetype="testing" source="reports-tests"
| spath path=libraryPath output=library
| rex field=_raw "result\.(?<number>\d+)\{\}"
| spath path="result.number{}" output=testResult
| mvexpand testResult
| spath input=testResult path=fullName output=test_name
| spath input=testResult path=success output=test_outcome
| spath input=testResult path=skipped output=test_skipped
| spath input=testResult path=time output=test_time
| table library testResult test_name test_outcome test_skipped test_time
| eval status=if(test_outcome="true", "Passed", if(test_outcome="false", "Failed", if(test_skipped="true", "NotExecuted", "")))
| stats count sum(eval(if(status="Passed", 1, 0))) as passed_tests, sum(eval(if(status="Failed", 1, 0))) as failed_tests, sum(eval(if(status="NotExecuted", 1, 0))) as test_skipped by test_name library test_time
| eval total_tests = passed_tests + failed_tests
| eval success_ratio=round((passed_tests/total_tests)*100,2)
| table library, test_name, total_tests, passed_tests, failed_tests, test_skipped, success_ratio test_time
| sort + success_ratio
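For illustration only (this is plain Python, not SPL, and the event shape below is a trimmed-down assumption based on the fields named in the question), the dynamic lookup being attempted — pulling the test-result array out from under a numeric key that varies per event — looks like this:

```python
import json
import re

# Sample event mimicking the assumed structure from the question:
# the "result" object holds a numeric key that varies per event.
raw = json.dumps({
    "libraryPath": "libs/demo",
    "result": {
        "69991058": [
            {"fullName": "test_a", "success": "true", "skipped": "false", "time": 0.12},
            {"fullName": "test_b", "success": "false", "skipped": "false", "time": 0.34},
        ]
    },
})

def extract_results(event_json: str) -> list:
    """Return the test-result list under whatever numeric key 'result' holds."""
    event = json.loads(event_json)
    results = []
    for key, value in event.get("result", {}).items():
        if re.fullmatch(r"\d+", key):   # dynamic numeric key, e.g. "69991058"
            results.extend(value)
    return results

print([t["fullName"] for t in extract_results(raw)])   # -> ['test_a', 'test_b']
```

Note the likely catch in the SPL above: spath treats "result.number{}" as a literal path string rather than substituting the value of the extracted number field, so the dynamic key has to be resolved some other way before the path is used.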
Hi Splunkers, I am working on creating a column + line chart dashboard that shows database latency. I'm encountering an issue where I'm trying to pass a token value to the overlay options for line chart representation over a column chart. Here is what I currently have — my chart and my SPL query:

SPL:
index=development sourcetype=rwa_custom_function user_action=swmfs_test ds_file=*
| eval ds_file_path=ds_path."\\".ds_file
| chart avg(ms_per_block) as avg_processing_time_per_block over ds_file_path by machine
| appendcols [search index=development sourcetype=rwa_custom_function user_action=swmfs_test ds_file=* | eval ds_file_path=ds_path."\\".ds_file | stats max(block_count) as total_blocks by ds_file_path]

I need to assign the overlay field value (avg_processing_time_per_block) from this line in the SPL:

| chart avg(ms_per_block) as avg_processing_time_per_block over ds_file_path by machine

The reason I'm attempting to assign it as a token is that avg_processing_time_per_block has dynamic values (sometimes it may be 10 or 12 machines' data) instead of rwmini and rwws01. The column has the total_blocks value.

Or is there any other way to achieve this requirement? Your thoughts on this are highly appreciated. Thank you in advance. Sanjai
Hi. limits.conf on the indexers, or simply on the search head(s)? Or better, both?

EDIT: better on the indexer side, since limit = applies at search time from the SH to the indexer peer, and 100 is the default limit. I also had this "problem" with a ~150-field JSON, and a simple:

[kv]
limit = 0
indexed_kv_limit = 0
maxcols = 512
maxchars = 102400

solved it on the indexer side. Thanks for the trick.
Hi all, I have deployed a new deployment server (AWS EC2 instance) and updated the existing Route 53 DNS entry to point to this new server. But I see the deployment clients are still making connections to the old server. I believe there is an old connection saved on the deployment client. Does anyone know how to resolve this issue? Your solution would help me a lot. Regards, PNV
@karthi2809 Please check the sample XML below. Observe the `new_value` token and use it in your search.

<form version="1.1" theme="dark">
  <label>Application</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="BankApp" searchWhenChanged="true">
      <label>ApplicationName</label>
      <choice value="*">All</choice>
      <search>
        <query> | makeresults | eval applicationName="Test1,Test2,Test3" | eval applicationName=split(applicationName,",") | stats count by applicationName | table applicationName </query>
      </search>
      <fieldForLabel>applicationName</fieldForLabel>
      <fieldForValue>applicationName</fieldForValue>
      <default>*</default>
      <prefix>applicationName="</prefix>
      <suffix>"</suffix>
      <change>
        <condition match="$value$==&quot;*&quot;">
          <set token="new_value">applicationName IN ("Test1", "Test2", "Test3")</set>
        </condition>
        <condition>
          <set token="new_value">applicationName = $BankApp$</set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        Dropdown Value = $BankApp$ <br/>
        new_value = $new_value$
      </html>
    </panel>
  </row>
</form>

I hope this will help you. Thanks, KV. If any of my replies help you to solve the problem or gain knowledge, an upvote would be appreciated.
Yes, it was the padding / max cache size that was the culprit. The calculation I did was wrong.   Thank you André
Hi @mdunnavant, I noticed you were working on passing a token to a chart overlay. I'm encountering a similar issue where I'm trying to pass a token value to the overlay options for line chart representation over a column chart. If you've managed to achieve this, could you please share how you made the overlay work with a token value? Your insights would be greatly appreciated. My chart and my SPL query:

SPL:
index=development sourcetype=rwa_custom_function user_action=swmfs_test ds_file=*
| eval ds_file_path=ds_path."\\".ds_file
| chart avg(ms_per_block) as avg_processing_time_per_block over ds_file_path by machine
| appendcols [search index=development sourcetype=rwa_custom_function user_action=swmfs_test ds_file=* | eval ds_file_path=ds_path."\\".ds_file | stats max(block_count) as total_blocks by ds_file_path]

I need to assign the overlay field value from this line in the SPL:

| chart avg(ms_per_block) as avg_processing_time_per_block over ds_file_path by machine

The reason I'm attempting to assign it as a token is that avg_processing_time_per_block has dynamic values (sometimes it may be 10 or 12) instead of rwmini and rwws01. Thanks in advance.
Thanks in advance. I am using dropdown values for my requirement. In the dropdown I am using a token, getting the values from an inputlookup, and passing the value to the Splunk query. There are two dropdowns: one is the application name, the other is the interface name. If I select values, I get results. If I select All, the value shows as * in the Splunk query. Instead of *, I want to get the values as OR conditions. If the token gets *, it shows all the values, but I want to show only the values coming from the inputlookup, for both application name and interface name.

When I select All, my Splunk query looks like this:

index=mulesoft environment=PRD (applicationName="*" OR priority IN ("ERROR", "WARN"))
| stats values(*) AS * BY correlationId applicationName
| rename content.InterfaceName AS InterfaceName content.FileList{} AS FileList content.Filename as FileName content.ErrorMsg as ErrorMsg
| eval Status=case(priority="ERROR","ERROR", priority="WARN","WARN", priority!="ERROR","SUCCESS")
| fields Status InterfaceName applicationName FileList FileName correlationId ErrorMsg message
| search InterfaceName="*" FileList="*"
| sort -timestamp

I am expecting:

index=mulesoft environment=PRD applicationName IN ("Test1", "Test2", "Test3") OR priority IN ("ERROR", "WARN")
| stats values(*) AS * BY correlationId applicationName
| rename content.InterfaceName AS InterfaceName content.FileList{} AS FileList content.Filename as FileName content.ErrorMsg as ErrorMsg
| eval Status=case(priority="ERROR","ERROR", priority="WARN","WARN", priority!="ERROR","SUCCESS")
| fields Status InterfaceName applicationName FileList FileName correlationId ErrorMsg message
| search InterfaceName IN ("aa", "bb", "cc") AND FileList="*"
| sort -timestamp

Dropdown code:

</input><input type="dropdown" token="BankApp" searchWhenChanged="true" depends="$BankDropDown$">
  <label>ApplicationName</label>
  <choice value="*">All</choice>
  <search>
    <query> | inputlookup BankIntegration.csv | dedup applicationName | sort applicationName | table applicationName </query>
  </search>
  <fieldForLabel>applicationName</fieldForLabel>
  <fieldForValue>applicationName</fieldForValue>
  <default>*</default>
  <prefix>applicationName="</prefix>
  <suffix>"</suffix>
</input>
<input type="dropdown" token="interface" searchWhenChanged="true" depends="$BankDropDown$">
  <label>InterfaceName</label>
  <choice value="*">All</choice>
  <search>
    <query> | inputlookup BankIntegration.csv | search $BankApp$ | sort InterfaceName | table InterfaceName </query>
  </search>
  <fieldForLabel>InterfaceName</fieldForLabel>
  <fieldForValue>InterfaceName</fieldForValue>
  <default>*</default>
  <prefix>InterfaceName="</prefix>
  <suffix>"</suffix>
</input>
@altink - Surely you can do that. Go to classic Splunkbase. Click on Manage App (you won't see this option if you are not the editor or owner of the app). Click on any of the version numbers for which you would like to update the release notes. There, you should be able to update the release notes or any other details. I hope this is helpful! If this helps, kindly upvote and accept the answer.
How do you implement this using an Ansible playbook? I'm also stuck with this process of accepting the license in Splunk. I'm using user-seed.conf, but it couldn't access the src path, since I'm using GitLab as my repository.

- name: Generate Splunk Seed Password
  ansible.builtin.set_fact:
    splunk_seed_passwd: "{{ 'password' | password_hash('sha512') }}"
  register: hashed_pwd
  when: splunk_agent_status.rc != 0

- name: Create user-seed.conf file
  ansible.builtin.template:
    dest: /opt/splunkforwarder/etc/system/local/user-seed.conf
    owner: root
    group: root
    mode: 0640
    option: "{{ item.opt }}"
    value: "{{ item.val }}"
  with_items:
    - { opt: 'USERNAME', val: 'admin' }
    - { opt: 'HASHED_PASSWORD', val: '{{ hashed_pwd }}' }
  become: true
  when: splunk_agent_status.rc != 0
Hi @harishlnu One way is using the REST API: /rest/health of SOAR. The status field contains all the daemons' health information, plus additional info on resource utilization. https://docs.splunk.com/Documentation/SOAR/current/PlatformAPI/RESTInfo#.2Frest.2Fhealth To monitor it, I would run an external script; or, if you are using Splunk Enterprise, you can call the above REST API with the | restsoar command and create an alert. You should install the official Splunk App for SOAR (https://splunkbase.splunk.com/app/6361) to use the | restsoar command. -------- Srikanth Yarlagadda
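As a sketch of the "external script" approach (the payload shape below is a simplified assumption for illustration, not SOAR's exact /rest/health response schema), the alerting logic boils down to parsing the health JSON and flagging anything not reporting healthy:

```python
import json

def unhealthy_daemons(health_json: str) -> list:
    """Return names of daemons not reporting 'up' in a health payload.

    Assumed (simplified) shape: a top-level object mapping daemon
    names to objects carrying a 'status' field.
    """
    health = json.loads(health_json)
    return [name for name, info in health.items()
            if isinstance(info, dict) and info.get("status") != "up"]

# Hypothetical sample response for illustration only:
sample = json.dumps({
    "actiond": {"status": "up"},
    "decided": {"status": "down"},
    "ingestd": {"status": "up"},
})
print(unhealthy_daemons(sample))  # -> ['decided']
```

A real script would fetch the JSON from the SOAR instance's /rest/health endpoint (with authentication) and raise an alert whenever this list is non-empty.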
This is a message saying that the server you're trying to send your emails through doesn't let you do so (at least not without proper authentication first). It's something you have to work out with your email server provider (or configure proper settings on your Splunk server).
I am getting the following error :   command="sendemail", (*****SMTP; Client was not authenticated to send anonymous mail during MAIL FROM', '*****.com') while sending mail to: it-security@durr.com
Hi @swaprks, If you're relying on automatic field extraction, i.e. KV_MODE = auto and AUTO_KV_JSON = true, or KV_MODE = json, or INDEXED_EXTRACTIONS = JSON, only the nested fields are extracted, e.g.:

initiatedBy.user.id
targetResources{}.id

Arrays are extracted as multi-valued fields, e.g.:

targetResources{}.modifiedProperties{}.displayName :=
  AccountEnabled
  StsRefreshTokensValidFrom
  UserPrincipalName
  UserType
  Included Updated Properties

Automatic extraction of arrays of objects with array fields can also be confusing. To return the native JSON directly, extract the fields as part of your search:

index=directoryaudit
| eval json=json(_raw), initiatedBy=json_extract(json, "initiatedBy"), targetResources=json_extract(json, "targetResources")
| fields id activityDisplayName result operationType correlationId initiatedBy resultReason targetResources category loggedByService activityDateTime
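As a rough Python analogue (the event below is a trimmed-down, assumed shape, not the full directory-audit schema), json_extract corresponds to pulling a nested object out of the parsed event and re-serializing it as JSON text:

```python
import json

# Trimmed-down, assumed directory-audit event for illustration only.
raw = json.dumps({
    "id": "abc-123",
    "activityDisplayName": "Update user",
    "initiatedBy": {"user": {"id": "u-1"}},
    "targetResources": [
        {"id": "t-1",
         "modifiedProperties": [{"displayName": "AccountEnabled"}]}
    ],
})

event = json.loads(raw)

# Roughly: eval initiatedBy=json_extract(json, "initiatedBy")
initiated_by = json.dumps(event["initiatedBy"])
# Roughly: eval targetResources=json_extract(json, "targetResources")
target_resources = json.dumps(event["targetResources"])

print(initiated_by)       # the nested object as native JSON text
print(target_resources)   # the array as native JSON text
```

The point is that the nested structure survives intact as one value, instead of being flattened into many dotted, multi-valued field names.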
Thank you @gcusello. But I do not own a Splunk system, so I have no support account. I am just a developer who has published two apps on Splunkbase. Should I ask Splunk Developer via e-mail instead? Best regards, Altin
Hi @Mfmahdi , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @aasserhifni, the apps distributed by the Deployer are in this folder, not in apps, and aren't installed on the Deployer. Is this app in $SPLUNK_HOME/etc/shcluster/apps on the Deployer? If this app is in this folder, remove it and push the apps again. Ciao. Giuseppe
Hi, you can read more about how to use btool check at https://dev.splunk.com/enterprise/tutorials/module_validate/validateapp/ r. Ismo
As a quick follow-up, the setting is recognized by all currently supported versions of Splunk Enterprise and present at least as far back as Splunk Enterprise 8.1; however, it's not documented.