All Posts

Hi @aasserhifni, there's surely a misunderstanding: a SH can be managed by a Deployer only in a SH Cluster; a Deployer cannot manage a stand-alone SH. You probably mean a Deployment Server, which is one of the checks I hinted at. If your SH is managed by a Deployment Server, you only have to remove the app from the serverclass that contains the SH. Ciao. Giuseppe
No. Either it's a stand-alone search head or it's managed by a deployer. Let me point out again that a Deployer is not the same as a Deployment Server.
I have some dashboards created with Splunk Dashboard Studio. Does anyone know where I can set static colors based on values in the dashboard? Thanks very much!
@gcusello @PickleRick @ITWhisperer Could you kindly help check this and provide an update?
Hi @gcusello. Sorry for my misunderstanding. The search head is managed by the deployer, but the app was installed on the search head only, and we just upgraded the Splunk version.
This sounds like an LB issue and not Splunk. As to why your F5 is not switching, it might be due to the continuous stream of syslog data being sent, so you will need to check your F5 LB config options such as round-robin/least connections, ensure it is configured for Layer 4 routing, and test it out.

Using Splunk instances such as HFs as syslog receivers is generally for testing and non-production environments. Why? Because if you restart the HF you will lose data for UDP sources: syslog is fire-and-forget, and syslog as a protocol is not ideal for load balancing, so if you can live with the fact that you can lose data, then so be it. Other issues you can get are data imbalance on the indexers and data not being parsed correctly, as the TAs need reconfiguring to handle sourcetype/parsing when sending syslog to Splunk receiver ports.

The best practice for Splunk production environments and syslog data is Splunk SC4S; if HA is required, look at Keepalived (Layer 4) or vMotion. SC4S can handle the data and apply metadata for parsing, among many other features, to effectively handle common syslog data. LB and HA are two different concepts.
I have found the solution: for the PHP Agent, the regex needs to be wrapped in # signs. After I used my regex as below, it worked: #(?i).*\.(jpeg|jpg|png|gif|jpeg|pdf|txt|js|html|tff|css|svg|png|pdf|dll|yml|yaml|ico|env|gz|bak)$#
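As a quick sanity check of the pattern itself (translated to Python purely for illustration, with the duplicate alternatives such as the repeated jpeg/png/pdf removed; the PHP Agent still takes the #-delimited form shown above):

```python
import re

# Same pattern as above, minus the PHP-style # delimiters,
# since Python's re module does not use pattern delimiters.
pattern = re.compile(
    r"(?i).*\.(jpeg|jpg|png|gif|pdf|txt|js|html|tff|css|svg|dll|yml|yaml|ico|env|gz|bak)$"
)

print(bool(pattern.match("/static/logo.PNG")))  # matches despite upper case, thanks to (?i)
print(bool(pattern.match("/api/orders")))       # no static-file extension, so no match
```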
Here's an example; you can then change it to use your SPL fields: | makeresults | eval millis_sec = 5000 | eval seconds = millis_sec/1000 | table millis_sec, seconds
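Outside SPL the same conversion is just a division by 1000 (a minimal Python sketch for illustration; the function name is my own):

```python
def millis_to_seconds(millis):
    """Convert a duration in milliseconds to seconds."""
    return millis / 1000

print(millis_to_seconds(5000))  # 5.0, matching the makeresults example above
```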
Hi @aasserhifni, in fact @PickleRick's question is the same one I asked a few answers ago: do you have a clustered SH or a stand-alone SH? If a stand-alone SH, do you have some update tools (such as Ansible or GPO), or is your SH managed by a Deployment Server? Ciao. Giuseppe
Hello @kate, Below are two things that you can check:
1) index=_internal host=<<ubuntu_hf>> ---> Check whether there are any events. Even if there are only a few events, it means that connectivity is established.
2) Did you restart Splunk after installing the Splunk Cloud UF credentials package?
If the above two approaches do not help, check splunkd.log on the Ubuntu UF instance itself. It should point out why it is failing to send the logs to Splunk Cloud. Thanks, Tejas. --- If the above solution is helpful, an upvote is appreciated.
I am using it like this, but it's not mapping:

<input type="dropdown" token="interface" searchWhenChanged="true" depends="$BankDropDown$">
  <label>InterfaceName</label>
  <choice value="*">All</choice>
  <search>
    <query>| inputlookup BankIntegration.csv | search $new_value$ | eval InterfaceName=split(InterfaceName,",") | stats count by InterfaceName | table InterfaceName</query>
  </search>
  <fieldForLabel>InterfaceName</fieldForLabel>
  <fieldForValue>InterfaceName</fieldForValue>
  <default>*</default>
  <prefix>InterfaceName="</prefix>
  <suffix>"</suffix>
  <change>
    <condition match="$value$==&quot;*&quot;">
      <set token="new_interface">InterfaceName IN ( "USBANK_KYRIBA_ORACLE_CE_BANKSTMTS_INOUT", "USBANK_AP_POSITIVE_PAY", "HSBC_NA_AP_ACH", "USBANK_AP_ACH", "HSBC_EU_KYRIBA_CE_BANKSTMTS_TWIST_INOUT")</set>
    </condition>
    <condition>
      <set token="new_interface">$interface$</set>
    </condition>
  </change>
</input>
Hello @Isaac_Hailperin, Can you share what steps you have taken so far? That would help us understand what is actually missing. Thanks, Tejas.
Hi Team, How do I convert a millisecond value to seconds? index=testing | timechart max("event.Properties.duration") Can anyone help with an SPL search that converts the millisecond value to seconds?
@karthi2809 Try using `new_value` as a filter in the Interface dropdown.
Hello, It seems that in Dashboard Studio the static choropleth map has no legend. Here is the SPL: index=xxxxxxxx sourcetype=yyyyyy mailgate* src=* | iplocation src | stats count by Country | geom geo_countries allFeatures=True featureIdField=Country If I put this map in a classic dashboard I get the map with the legend, but in Dashboard Studio no legend is shown. Is there a way to show this legend in Dashboard Studio? Regards, Emile
@gcusello, I have shared the sample events for all 3 queries, and for each Step field I want to get the Success and Failure information, so kindly help me achieve this. The query you provided pulls the total count of successes and failures, but I need a split by each "Step" field with its corresponding "Success" and "Failure" information. Kindly help check and update on this.
@ITWhisperer @PickleRick, Thank you for your response. Here are my updates as requested.

Query 1:
index="abc" ("Restart transaction item" NOT "Pending : transaction item:") | rex field=_raw "Restart transaction item: (?<Step>.*?) \(WorkId:" | table Step | stats Count by Step

Sample Events:
2024-04-21 03:00:02.6106|INFO|Transaction.Overflow.card.Command.Control|Restart transaction item: Validation (WorkId: 1234567) for RUNTIME: 987654|
2024-04-21 02:00:03.5437|INFO|Transaction.Overflow.card.Command.Control|Restart transaction item: Creation (WorkId: 1234567) for RUNTIME: 987654|
2024-04-18 09:00:10.9426|INFO|Transaction.Overflow.card.Command.Control|Restart transaction item: Compliance Portal Report (WorkId: 1234567) for RUNTIME: 987654|

Output in Table Format:
Step                        Count
Validation                  1
Creation                    1
Compliance Portal Report    1

Query 2:
index="abc" ("Error restart workflow item:") | rex field=_raw "Error restart workflow item: (?<Success>.*?) \(WorkId:" | table Success | stats Count by Success

The 1st and 2nd events contain 30+ lines each, so I have taken only a small portion of them; the 3rd and 4th events contain 10+ lines, and I have extracted a small amount of data.

Sample Events:
2024-04-14 02:00:07.8759|ERROR|Transaction.Overflow.card.Command.Control|Error restart workflow item: Validation (WorkId: 1234567) for RUNTIME: 987654|System.Info.Entra.Solution.UpdateExecution: An error occurred while updating the entries. See the inner exception for details. ---> System.Data.Entity.Core.UpdateException: An error occurred while updating the entries. See the inner exception for details. ---> System.Data.SqlClient.SqlException: Transaction (Process ID 12) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
2024-03-26 15:00:05.9123|ERROR|Transaction.Overflow.card.Command.Control|Error restart workflow item: Validation (WorkId: 1234567) for RUNTIME: 987654|System.Data.Entity.Infrastructure.DbUpdateException: An error occurred while updating the entries. See the inner exception for details. ---> System.Data.Entity.Core.UpdateException: An error occurred while updating the entries. See the inner exception for details. ---> System.Data.SqlClient.SqlException: Transaction (Process ID 12) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
2024-03-27 03:00:15.3116|ERROR|Transaction.Overflow.card.Command.Control|Error restart workflow item: Creation (WorkId: 1234567) for RUNTIME: 987654|System.NullReferenceException: Object reference not set to an instance of an object.
2024-03-27 01:00:16.3231|ERROR|Transaction.Overflow.card.Command.Control|Error restart workflow item: Compliance Portal Report (WorkId: 1234567) for RUNTIME: 987654|System.NullReferenceException: Object reference not set to an instance of an object.

Output in Table Format:
Success                     Count
Validation                  2
Creation                    1
Compliance Portal Report    1

Query 3:
index="abc" "Restart Pending event from command," | rex field=_raw "Restart Pending event from command, (?<Failure>.*?) \Workid" | table Failure | stats Count by Failure

Sample Events:
2024-04-21 03:01:14.7929|INFO|Transaction.Overflow.card.Command.ValidationCommand|Pending: Restart Pending event from command, Validation Workid (WorkId: 1234567) for RUNTIME: 987654.|
2024-04-18 09:00:11.8332|INFO|Transaction.Overflow.card.Command.CreationCommand|Pending: Restart Pending event from command, Creation Workid (WorkId: 1234567) for RUNTIME: 987654.|
2024-04-17 06:51:16.7544|INFO|Transaction.Overflow.card.Command.CompliancePortalReportCommand|Pending: Restart Pending event from command, Compliance Portal Report Workid (WorkId: 1234567) for RUNTIME: 987654.|
2024-04-16 13:00:34.6238|INFO|Transaction.Overflow.card.Command.PageCountsCommand|Pending: Restart Pending event from command, Page Counts Workid (WorkId: 1234567) for RUNTIME: 987654.|

Output in Table Format:
Failure                     Count
Validation                  1
Creation                    1
Compliance Portal Report    1
Page Counts                 1

So I need to combine all 3 queries: for example, for the Step "Validation" I need to know how many occurrences there were in the last 24 hours and, of those, how many were successes and how many were failures. Kindly help check and update on this.
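To illustrate the kind of combination being asked for (a Python sketch over condensed versions of the sample events above, purely for illustration; in Splunk itself this would be a single SPL search, and the "success"/"failure" labels simply follow the field names used in the three queries):

```python
import re
from collections import defaultdict

# Condensed log lines based on the sample events above.
events = [
    "2024-04-21 03:00:02|INFO|...|Restart transaction item: Validation (WorkId: 1234567) for RUNTIME: 987654|",
    "2024-04-14 02:00:07|ERROR|...|Error restart workflow item: Validation (WorkId: 1234567) for RUNTIME: 987654|...",
    "2024-04-21 03:01:14|INFO|...|Pending: Restart Pending event from command, Validation Workid (WorkId: 1234567) for RUNTIME: 987654.|",
    "2024-03-27 03:00:15|ERROR|...|Error restart workflow item: Creation (WorkId: 1234567) for RUNTIME: 987654|...",
]

# One regex per category; the captured group is the Step name.
patterns = {
    "restarted": re.compile(r"Restart transaction item: (.*?) \(WorkId:"),
    "success":   re.compile(r"Error restart workflow item: (.*?) \(WorkId:"),
    "failure":   re.compile(r"Restart Pending event from command, (.*?) Workid"),
}

# Tally every category per Step into one combined table.
counts = defaultdict(lambda: {"restarted": 0, "success": 0, "failure": 0})
for line in events:
    for category, pat in patterns.items():
        m = pat.search(line)
        if m:
            counts[m.group(1)][category] += 1

for step, c in counts.items():
    print(step, c)
```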
Hello @PickleRick. Sorry for my late response. You're right; in our case, it's a stand-alone search head.
We used an F5 load balancer in front of 2 intermediate forwarders to receive syslog messages. The issue with the load balancer is that all logs are forwarded to one IF and the other remains empty. We need to balance the load between them. Where can I investigate this issue?
From what you're saying, something seems to be overriding it if it's still taking the old setting, which could be another app.

Can you show me the output of this command on the UF, NOT the deployment server? (Obviously remove your hostname and IP for security reasons.)

/opt/splunkforwarder/bin/splunk btool deploymentclient list --debug

Can you also check the log on the UF (it may help explain why; it should show connection failures at this stage):

cat /opt/splunkforwarder/var/log/splunk/splunkd.log | grep DC:DeploymentClient

Can you confirm the UF can communicate with port 8089 on the Deployment Server (telnet to it if you can); temporarily disable the firewall if you can.

Check the Deployment Server ports: run the below on the Deployment Server:

netstat -tuplna