All Posts


Hello everyone, I have built a dashboard with Dashboard Studio, but I have noticed that although the panels expose many properties, you cannot change the position of the markdown text. I have already looked through the documentation, but to no avail (maybe I am missing something). By changing position I simply mean aligning the text inside the panel to the left, centre, or right. Do you have any ideas? Thank you, biwanari
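For reference, here is a minimal sketch of how a markdown panel is defined in the Dashboard Studio source JSON, assuming the standard splunk.markdown visualization type (the visualization name and text are placeholders). The question above is about the apparent absence of a text-alignment setting among these options.

    "visualizations": {
        "viz_markdown_1": {
            "type": "splunk.markdown",
            "options": {
                "markdown": "## Overview\nPanel description goes here."
            }
        }
    }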
Hi @PickleRick, That is indeed the setup I have. That is correct: there isn't an issue with the connection between the HF and Splunk Cloud, but rather the results from the DB Connect app are not being sent to Splunk Cloud. I am mostly looking to see whether anyone else has faced this issue before, because I have checked several things and everything looks fine, yet I still have no real solution to get the data transferred.
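As a hedged starting point for this kind of troubleshooting: DB Connect writes its own logs into the heavy forwarder's _internal index, so a search along the lines below can show whether the input is erroring before anything is forwarded. The source wildcard is an assumption and may need adjusting to your environment.

    index=_internal source=*splunk_app_db_connect* (ERROR OR WARN OR FATAL)
    | stats count by source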
Hi @Strangertinz , ok, let me know if I can help you further. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors  
Here is an actual problem sample... the outlier at -9.4 is easy to see.
Hi @gcusello, I will check again. I also used batched results and still did not see any data. This is why I am not narrowing my focus to the rising column, but I will evaluate further and make sure there are no errors with the rising column. Thanks!
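For context on why the rising column matters here: a rising-column input only fetches rows whose rising-column value is greater than the stored checkpoint, so it conceptually runs a query like the sketch below (table and column names are illustrative placeholders) and returns nothing if the checkpoint is already ahead of the data.

    SELECT *
    FROM my_table
    WHERE id_column > ?        -- ? is replaced by the last saved checkpoint value
    ORDER BY id_column ASC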
@yuanliu Thank you for your help. I accepted your suggestion as the solution, with the following notes:
- sendemail didn't work because I wasn't an admin
- Using an alert worked just fine
Can you clarify what you meant by "join" will get me nowhere? The result using join worked just fine. My intention is to "join" the data, not to "append" it. When I used append, the data was appended to the original data and I had to use the stats command to merge it. Thanks
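For readers following along, here is a minimal sketch of the two approaches being discussed; the index, sourcetype, and field names are placeholders. Both can produce one merged row per id, they just get there differently.

Using join (subsearch results merged onto each matching row):

    index=main sourcetype=typeA
    | join type=left id
        [ search index=main sourcetype=typeB | fields id, other_field ]

Using append plus stats (rows stacked, then merged by id):

    index=main sourcetype=typeA
    | append
        [ search index=main sourcetype=typeB | fields id, other_field ]
    | stats values(*) as * by id

Note that join is subject to subsearch result limits, which is usually why append plus stats is suggested for larger datasets.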
So all three files are picked up by this one monitor stanza? Are the files all truly the same format, i.e. the same "sourcetype"? Can you explain a bit more about why we are omitting just one file? What can we use to uniquely identify this particular source: the host? It sounds like it has to be source plus something else to make it unique. If you can't differentiate them at the source, then perhaps something like an ingest_eval or a "sourcetype rename" is needed. It seems to me you might just be overloading the config... I mean, maybe just don't deploy an input that picks up this file in prod? That's why I asked whether they truly are all the same sourcetype/format.
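A minimal sketch of the ingest_eval idea mentioned above, assuming prod can be recognized from the hostname; the stanza name, sourcetype, host pattern, and path regex are all placeholders, and this would go on the first full Splunk instance the data passes through.

    props.conf
    [<your_sourcetype>]
    TRANSFORMS-dropprod = drop_prod_copy

    transforms.conf
    [drop_prod_copy]
    INGEST_EVAL = queue=if(match(host, "prod") AND match(source, "my_app\.log$"), "nullQueue", queue)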
Hi matty, Thanks for your quick response. The lab and prod file paths are the same - yes, but the sourcetype name is different for prod and stage. I can't match on the sourcetype in props because three log files are part of one sourcetype and I am restricting only one of them - I still want all three logs in stage. Also, are you on-prem or cloud? - on-prem. What does your inputs.conf stanza look like?
[monitor://<path>]
sourcetype = <sourcetype name>
Thanks
The Windows command doesn't work - nothing happens: msiexec.exe /I Splunk.msi SPLUNKUSERNAME=SplunkAdmin SPLUNKPASSWORD=MyNewPassword /quiet
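One thing worth checking, offered as a hedged sketch rather than a confirmed fix: Splunk's MSI expects explicit license acceptance for silent installs, and adding verbose MSI logging usually shows why an install exits without doing anything. The log file name below is arbitrary; the username and password are the ones from the post above.

    msiexec.exe /i Splunk.msi AGREETOLICENSE=Yes SPLUNKUSERNAME=SplunkAdmin SPLUNKPASSWORD=MyNewPassword /quiet /L*v splunk_install.log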
Do you have an SPL code hint for me?
This is part of the splunkd health report. It is configured in health.conf. I would suggest reviewing whether this "forwarder" is sending old files, is actually falling behind, or needs some cleanup of its ingestion tracker values.
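For reference, the thresholds behind this indicator live in health.conf. A sketch along the lines below adjusts when the ingestion_latency_gap_multiplier indicator turns yellow or red; the stanza and indicator names follow the health report message, and the numeric values are illustrative only, so check health.conf.spec for the actual defaults before changing anything.

    [feature:ingestion_latency]
    indicator:ingestion_latency_gap_multiplier:yellow = 5
    indicator:ingestion_latency_gap_multiplier:red = 10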
Hi! If I am following your question, you are concerned because the lab and prod file paths are the same? You are not required to set the source path of the file in props.conf to get the desired outcome. If your sourcetype is being set in the inputs that pick up this file, you can simply configure the props to match on the sourcetype to do the processing. Also, I don't think you want to duplicate stanza names in transforms.conf, i.e. [setnull] is named twice; that could lead to unintended consequences. What does your inputs.conf stanza look like? How are you sending this file (UF to indexers? UF to HF to indexers?) Also, are you on-prem or cloud? I ask because Ingest Actions (and other solutions like Ingest Processor or Edge Processor) provide a UI for doing this, which helps you validate and avoid config mistakes. Regardless, please always test your configs in a local lab environment to avoid having a bad day.
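A minimal sketch of the sourcetype-based approach described above, assuming the prod copy of the file arrives with its own sourcetype (all names are placeholders); applied on the indexers or heavy forwarder, it drops only events of that sourcetype and leaves the stage sourcetype untouched.

    props.conf
    [<prod_sourcetype>]
    TRANSFORMS-null = setnull_prod

    transforms.conf
    [setnull_prod]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

If the same sourcetype also covers files that should be kept, a more selective condition is needed, for example the ingest_eval sketch shown earlier in this feed.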
Hello, I have a common log file with the same name in both production and stage, but with a different sourcetype name in each. As I don't want those logs to be ingested from production, I have added the entry below in props.conf:
[source::<Log file path>]
Transforms-null = setnull

transforms.conf:
[setnull]
REGEX = BODY
DEST_KEY = queue
FORMAT = nullQueue
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

But I want the same log file from stage and not from production - in props.conf, will adding the sourcetype of prod restrict the logs from production and still ingest the logs from stage, where the sourcetype name is different?
[source::<Log file path>]
[sourcetype = <Prod Sourcetype>]
Transforms-null = setnull

In addition, the prod sourcetype covers two other logs, and I don't want those to be stopped because of this configuration change. Thanks
Thank you for replying, Rick. My English is limited, but I will try. I made a mistake about the Splunk version: it is not 7.3 but 9.3.1. Why is my splunkd loopbacked? I installed splunk-9.3.1-0b8d769cb912-x64-release.msi and, as far as I know, I did not change any settings. On this server I ran netstat; the result is below.
C:\>netstat -an -p tcp
Active Connections
Proto Local Address Foreign Address State
TCP 0.0.0.0:80 0.0.0.0:0 LISTENING
TCP 0.0.0.0:88 0.0.0.0:0 LISTENING
TCP 0.0.0.0:135 0.0.0.0:0 LISTENING
TCP 0.0.0.0:389 0.0.0.0:0 LISTENING
TCP 0.0.0.0:443 0.0.0.0:0 LISTENING
TCP 0.0.0.0:445 0.0.0.0:0 LISTENING
TCP 0.0.0.0:464 0.0.0.0:0 LISTENING
TCP 0.0.0.0:593 0.0.0.0:0 LISTENING
TCP 0.0.0.0:636 0.0.0.0:0 LISTENING
TCP 0.0.0.0:3268 0.0.0.0:0 LISTENING
TCP 0.0.0.0:3269 0.0.0.0:0 LISTENING
TCP 0.0.0.0:4112 0.0.0.0:0 LISTENING
TCP 0.0.0.0:4430 0.0.0.0:0 LISTENING
TCP 0.0.0.0:4649 0.0.0.0:0 LISTENING
TCP 0.0.0.0:5985 0.0.0.0:0 LISTENING
TCP 0.0.0.0:8000 0.0.0.0:0 LISTENING
TCP 0.0.0.0:8080 0.0.0.0:0 LISTENING
TCP 0.0.0.0:8089 0.0.0.0:0 LISTENING
TCP 0.0.0.0:8191 0.0.0.0:0 LISTENING
TCP 0.0.0.0:9389 0.0.0.0:0 LISTENING
TCP 0.0.0.0:47001 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49664 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49665 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49666 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49667 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49668 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49670 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49671 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49672 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49674 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49677 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49681 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49697 0.0.0.0:0 LISTENING
TCP 0.0.0.0:51142 0.0.0.0:0 LISTENING
TCP 0.0.0.0:62000 0.0.0.0:0 LISTENING
TCP 127.0.0.1:53 0.0.0.0:0 LISTENING
TCP 127.0.0.1:8000 127.0.0.1:59455 ESTABLISHED
TCP 127.0.0.1:8000 127.0.0.1:59484 ESTABLISHED
TCP 127.0.0.1:8065 0.0.0.0:0 LISTENING
TCP 127.0.0.1:8089 127.0.0.1:60730 ESTABLISHED
TCP 127.0.0.1:8089 127.0.0.1:62099 TIME_WAIT
TCP 127.0.0.1:8191 127.0.0.1:53438 ESTABLISHED
TCP 127.0.0.1:8191 127.0.0.1:53439 ESTABLISHED
TCP 127.0.0.1:8191 127.0.0.1:53443 ESTABLISHED
TCP 127.0.0.1:8191 127.0.0.1:53448 ESTABLISHED
TCP 127.0.0.1:8191 127.0.0.1:53501 ESTABLISHED
TCP 127.0.0.1:8191 127.0.0.1:53504 ESTABLISHED
TCP 127.0.0.1:8191 127.0.0.1:53506 ESTABLISHED
TCP 127.0.0.1:8191 127.0.0.1:53508 ESTABLISHED
TCP 127.0.0.1:8191 127.0.0.1:53509 ESTABLISHED
TCP 127.0.0.1:8191 127.0.0.1:53510 ESTABLISHED
TCP 127.0.0.1:8191 127.0.0.1:53511 ESTABLISHED
TCP 127.0.0.1:8191 127.0.0.1:53512 ESTABLISHED
TCP 127.0.0.1:8191 127.0.0.1:58525 ESTABLISHED
TCP 127.0.0.1:53422 0.0.0.0:0 LISTENING
TCP 127.0.0.1:53422 127.0.0.1:53473 ESTABLISHED
TCP 127.0.0.1:53426 0.0.0.0:0 LISTENING
TCP 127.0.0.1:53438 127.0.0.1:8191 ESTABLISHED
TCP 127.0.0.1:53439 127.0.0.1:8191 ESTABLISHED
TCP 127.0.0.1:53443 127.0.0.1:8191 ESTABLISHED
TCP 127.0.0.1:53448 127.0.0.1:8191 ESTABLISHED
TCP 127.0.0.1:53473 127.0.0.1:53422 ESTABLISHED
TCP 127.0.0.1:53501 127.0.0.1:8191 ESTABLISHED
TCP 127.0.0.1:53504 127.0.0.1:8191 ESTABLISHED
TCP 127.0.0.1:53506 127.0.0.1:8191 ESTABLISHED
TCP 127.0.0.1:53508 127.0.0.1:8191 ESTABLISHED
TCP 127.0.0.1:53509 127.0.0.1:8191 ESTABLISHED
TCP 127.0.0.1:53510 127.0.0.1:8191 ESTABLISHED
TCP 127.0.0.1:53511 127.0.0.1:8191 ESTABLISHED
TCP 127.0.0.1:53512 127.0.0.1:8191 ESTABLISHED
TCP 127.0.0.1:58525 127.0.0.1:8191 ESTABLISHED
TCP 127.0.0.1:59455 127.0.0.1:8000 ESTABLISHED
TCP 127.0.0.1:59484 127.0.0.1:8000 ESTABLISHED
TCP 127.0.0.1:60730 127.0.0.1:8089 ESTABLISHED
TCP 127.0.0.1:61987 127.0.0.1:8089 TIME_WAIT
TCP 192.168.0.8:53 0.0.0.0:0 LISTENING
TCP 192.168.0.8:139 0.0.0.0:0 LISTENING
TCP 192.168.0.8:445 192.168.0.1:51760 ESTABLISHED
TCP 192.168.0.8:445 192.168.0.44:59017 ESTABLISHED
TCP 192.168.0.8:4649 192.168.0.44:59008 ESTABLISHED
TCP 192.168.0.8:58220 20.198.118.190:443 ESTABLISHED
TCP 192.168.0.8:59051 20.194.180.207:443 ESTABLISHED
TCP 192.168.0.8:59103 3.216.246.128:443 ESTABLISHED
TCP 192.168.0.8:59125 50.16.88.233:443 ESTABLISHED
TCP 192.168.0.8:59149 54.228.78.235:443 ESTABLISHED
TCP 192.168.0.8:59174 151.101.193.140:443 ESTABLISHED
TCP 192.168.0.8:59204 151.101.193.140:443 ESTABLISHED
TCP 192.168.0.8:59207 35.186.194.58:443 ESTABLISHED
TCP 192.168.0.8:59218 151.101.193.140:443 ESTABLISHED
TCP 192.168.0.8:59261 34.149.224.134:443 ESTABLISHED
TCP 192.168.0.8:59275 151.101.228.157:443 ESTABLISHED
TCP 192.168.0.8:59297 54.228.78.235:443 ESTABLISHED
TCP 192.168.0.8:59301 151.101.129.181:443 TIME_WAIT
TCP 192.168.0.8:59507 184.72.249.85:443 ESTABLISHED
TCP 192.168.0.8:60773 104.26.13.205:443 TIME_WAIT
TCP 192.168.0.8:60785 23.50.118.133:443 ESTABLISHED
TCP 192.168.0.8:60829 34.107.204.85:443 TIME_WAIT
TCP 192.168.0.8:60851 13.225.183.97:443 ESTABLISHED
TCP 192.168.0.8:60887 172.66.0.227:443 TIME_WAIT
TCP 192.168.0.8:60994 18.154.132.17:443 TIME_WAIT
TCP 192.168.0.8:61016 34.66.73.214:443 ESTABLISHED
TCP 192.168.0.8:61027 3.226.63.48:443 ESTABLISHED
TCP 192.168.0.8:61047 35.186.224.24:443 ESTABLISHED
TCP 192.168.0.8:61050 34.117.162.98:443 TIME_WAIT
TCP 192.168.0.8:61074 34.111.113.62:443 ESTABLISHED
TCP 192.168.0.8:61099 107.178.240.89:443 ESTABLISHED
TCP 192.168.0.8:61108 35.244.154.8:443 ESTABLISHED
TCP 192.168.0.8:61109 107.178.254.65:443 ESTABLISHED
TCP 192.168.0.8:61111 34.98.64.218:443 ESTABLISHED
TCP 192.168.0.8:61184 20.198.118.190:443 ESTABLISHED
TCP 192.168.0.8:61212 151.101.1.140:443 ESTABLISHED
TCP 192.168.0.8:61412 35.163.74.134:443 ESTABLISHED
TCP 192.168.0.8:61452 35.163.74.134:443 ESTABLISHED
TCP 192.168.0.8:61986 65.9.42.42:443 TIME_WAIT
TCP 192.168.0.8:62010 65.9.42.42:443 TIME_WAIT
TCP 192.168.0.8:62030 65.9.42.42:443 TIME_WAIT
TCP 192.168.0.8:62043 65.9.42.42:443 TIME_WAIT
TCP 192.168.0.8:62056 65.9.42.28:443 TIME_WAIT
TCP 192.168.0.8:62079 192.168.0.8:443 TIME_WAIT
TCP 192.168.0.8:62080 192.168.0.8:62000 TIME_WAIT
TCP 192.168.0.8:62082 65.9.42.62:443 TIME_WAIT
TCP 192.168.0.8:62098 65.9.42.62:443 TIME_WAIT
TCP 192.168.0.8:62103 13.107.21.239:443 ESTABLISHED
TCP 192.168.0.8:62104 13.107.21.239:443 ESTABLISHED
TCP 192.168.0.8:62117 65.9.42.62:443 TIME_WAIT
Why do the 80xx ports show "ESTABLISHED"? Shouldn't they appear as "LISTENING"? How can I change the status? Please tell me. Thank you.
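For anyone reproducing this, a hedged sketch of standard Windows commands to see which processes own those 8000/8089/8191 sockets; the PID value is a placeholder for whatever netstat reports.

    C:\> netstat -ano -p tcp | findstr ":8000 :8089 :8191"
    C:\> tasklist /FI "PID eq <pid_from_netstat>"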
To send specific notable events from the Enterprise Security Incident Review page for investigation, an add-on called the ServiceNow Security Operations Add-on is available. This add-on allows Splunk ES analysts to create security-related incidents and events in ServiceNow. It supports on-demand creation of a single ServiceNow event or incident from a Splunk event, and scheduled alerts can create both single and multiple ServiceNow events and incidents. For detailed integration steps, refer to the add-on documentation. The reverse integration between ServiceNow and Splunk for incident management can be achieved using an out-of-the-box method. If this reply is helpful, karma would be appreciated.
Hi @Nicolas2203, ok, good for you, let me know, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Ahhh... the SOURCE_KEY part I missed   Good catch!
Hello, I just checked, and the Microsoft Cloud Services add-on manages checkpoints locally on heavy forwarders. However, there is a configuration option in the app that allows you to store checkpoints in a container within an Azure storage account. This way, when you need to start log collection on another heavy forwarder, it should facilitate the process. I will configure that and test it, and I'll let you know! Thanks Nico
The IP address keeps changing, but the error stays the same: Forwarder Ingestion Latency. Root cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 272246. Message from D97C3DE9-B0CE-408F-9620-5274BAC12C72:192.168.1.191:50409. How do you solve this problem?