All Posts

When I uncomment the line SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no, I get the logs below when I run the command journalctl -b -u sc4s:

Apr 18 13:53:40 ip-MachineIP systemd[1]: Starting SC4S Container...
Apr 18 13:53:41 ip-MachineIP docker[12242]: latest: Pulling from splunk/splunk-connect-for-syslog/container3
Apr 18 13:53:41 ip-MachineIP docker[12242]: Digest: sha256:f8ff916d9cb6836cb0b03b578f51a3777c7a4c84e580fdad9b768cdc7ef2910e
Apr 18 13:53:41 ip-MachineIP docker[12242]: Status: Image is up to date for ghcr.io/splunk/splunk-connect-for-syslog/container3:latest
Apr 18 13:53:41 ip-MachineIP docker[12242]: ghcr.io/splunk/splunk-connect-for-syslog/container3:latest
Apr 18 13:53:41 ip-MachineIP systemd[1]: Started SC4S Container.
Apr 18 13:53:42 ip-MachineIP docker[12254]: SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...
Apr 18 13:53:43 ip-MachineIP docker[12254]: SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...
Apr 18 13:53:47 ip-MachineIP docker[12254]: syslog-ng checking config
Apr 18 13:53:47 ip-MachineIP docker[12254]: sc4s version=3.22.3
Apr 18 13:53:48 ip-MachineIP docker[12254]: starting goss
Apr 18 13:53:50 ip-MachineIP docker[12254]: starting syslog-ng

I have created all the indexes mentioned in the document (https://splunk.github.io/splunk-connect-for-syslog/main/gettingstarted/getting-started-splunk-setup/). I cannot find the file /opt/sc4s/local/context/splunk_index.csv. I am able to curl and send a message to Splunk using the -k flag in my curl command. Do I still need to whitelist anything if curl works?
Run this command to see if you have poor data ingestion balance across the indexers:

| tstats count where index=* by index splunk_server
| stats sum(count) as total dc(splunk_server) as dc_splunk_server by index

The dc_splunk_server field shows how many indexers contain data for a particular index. If you sort by count, check whether the largest data counts are spread across all indexers. You can also go a bit deeper and check the min/max/avg data count per indexer/index, to see whether the min or max falls outside 3*stdev of the average. This also checks whether the data is spread across all indexers:

| tstats count where index=* by index splunk_server
| stats avg(count) as avg_count min(count) as min_count max(count) as max_count stdev(count) as stdev_count dc(splunk_server) as dc_splunk_server by index
| eventstats max(dc_splunk_server) as total_splunk_servers
| where dc_splunk_server < total_splunk_servers OR (min_count < (avg_count - 3*stdev_count)) OR (max_count > (avg_count + 3*stdev_count))
Hi @inventsekar
Thank you for the answer!
1) I don't see any warnings in MC.
2) Only 1 indexer's bucket count is about 50,000; the other 9 indexers' counts are about 140,000~150,000. And each bucket on that 1 indexer is three times bigger than on the other indexers. So I checked the buckets in the terminal and found that the tsidx files are large.
3) Every indexer's conf is the same.
This trouble has continued for a few months. Is there anything else to check?
Hi @Bisho-Fouad .. on the DMC / license master you can find the license usage of a specific host. Please let us know exactly which step/status you are at. As you are asking about the GUI -- the SPL actually gives you more control.
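As a sketch of the SPL route (run from the license master / DMC search head), the license usage log records per-host licensed volume in the `b` field, keyed by `h` for host. Replace the placeholder host with your own:

```
index=_internal source=*license_usage.log type="Usage" h="<your_host>"
| stats sum(b) AS bytes
| eval MB = round(bytes/1024/1024, 2)
```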
Hi @dongwonn
Maybe more details please:
1) On the Monitoring Console, do you see any errors/warnings?
2) On indexer clustering, do you see a bucket imbalance?
3) May we know how you determined that only 1 indexer out of 10 is overused?
4) Any recent changes to the indexer cluster -- any upgrades/migrations, any new apps deployed, etc.?
Hi, I'm working in a Splunk team.
Environment: 3 SH, 10 IDX (1 of 10 IDX overused), replication factor 3, search factor 3.
Could it happen that searches continuously run on only certain indexers? I've been constantly monitoring them with top and ps -ef, and I'm seeing a lot of search operations on one particular indexer. Its CPU usage is roughly double. It's been going on for months. Can this be considered normal?
Here is another page that pretty much shows you how to do this https://docs.splunk.com/Documentation/Splunk/9.2.1/Viz/Buildandeditforms  
I can help - I asked whether you had already added the dropdown field. Have you done so? What have you tried before? It's pretty straightforward to add a dropdown input and add values to the dashboard - you don't need to write XML. The XML reference manual is here: https://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML This is a really good app you can install to a Splunk environment that shows many techniques for creating powerful dashboards: https://splunkbase.splunk.com/app/1603
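For completeness, here is a minimal Simple XML sketch of a dropdown input feeding a panel via a token (the token name, choices, and query are illustrative, not from your dashboard):

```
<form>
  <label>Example dashboard</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="env">
      <label>Environment</label>
      <choice value="prod">Production</choice>
      <choice value="dev">Development</choice>
      <default>prod</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=main environment="$env$" | stats count by host</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>
```

Changing the dropdown re-runs the panel search with the selected value substituted for $env$.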
Judging from the SPL, where you have two searches for FRUSTRATED, it seems you have data where multiple userExperienceScores can exist for the same event, hence all the mvexpanding. As @PickleRick points out, it's quite tricky to deal with multivalue fields, particularly when you have 6 MV fields that you are zipping up into 2 pairs (x and y). I assume you are using 2 pairs because there is not a 1:1 correlation between the MVs in each of the pairs. What I would suggest is to find a reasonably complex SINGLE event (or two) that exhibits the problem. That will make it much easier to diagnose the issue, and then we can help explain what is going on. If you are able to share an example of the raw data (sanitised, and preferably NOT a screen image, so we can produce a working example of a solution), that would be good.
Solution in my case: since I was using ArgoCD for deployment, it was overwriting new changes to the AppD Cluster Agent as part of sync, hence the agents were getting terminated. Also, I had to include the below in my instrumentation rules for AppD to work, as my container was running as non-root:

runAsUser: 9001
runAsGroup: 9001
I tried this a few days ago with a 404 - I see the main site is up .. but did the tool get taken down? 
We don't know your raw data, but the main question is why you go to all this trouble of mvzipping and joining all those values into a multivalue field when the next thing you want to do is mvexpand. Why not just filter on the raw data in the initial search?
@ITWhisperer Thank you so much, it really saved my time.
| eval Management=if(Applications="OCC", "Out", "In")
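If more applications ever need to map to "Out", an in()-based variant scales better than a single equality test (the extra value "EXT" below is purely illustrative):

```
| eval Management=if(in(Applications, "OCC", "EXT"), "Out", "In")
```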
Hi @selvam_sekar , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
I would like to add a column called Management to my table. The Management value is not part of the event data; it is something I would like to assign based on the value of Applications. Any help would be appreciated.

Management | Applications
In  | IIT
In  | ALP
In  | MAL
In  | HST
Out | OCC
In  | ALY
In  | GSS
In  | HHS
In  | ISD
Hi @Bisho-Fouad
Here's an example search to solve your question:

host=<your host> ``` and whatever else you need to filter your data ```
| eval bytes = len(_raw) ``` generally 1 character = 1 byte ```
| stats sum(bytes) AS bytes BY source ``` this gives the size of each log, assuming the source is the name of the log file ```
| eval kilobytes = bytes/1024
| eventstats sum(kilobytes) AS total_kb

Hope that helps
Thanks @yuanliu, I explained why I can't use path directly: it contains actual parameter values. For example, for the route /orders/{orderID}, the path could be:
/orders/123456
/orders/213123
/orders/435534
I want to analyze, for example, the count of failed requests, or percentiles of call duration, on this particular API route /orders/{orderID}. Of course I could modify my service code to print the route pattern in the log, but that is a different approach -- I would need to deploy new code to the production environment.
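One way to avoid a code change is to normalize the path back into a route pattern at search time with replace(). The field names (path, status, duration) are assumptions based on the examples above -- substitute your own:

```
| eval route=replace(path, "^/orders/\d+$", "/orders/{orderID}")
| stats count(eval(status>=500)) AS failed_requests perc95(duration) AS p95_duration BY route
```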
Hi @PickleRick
Not sure if I explained my requirements the right way. I would like to expand the multiple values present in each row into separate rows containing only the "Frustated"-related details, like the expected output below:

URL | Duration | Type | Status
www.cde.com | 88647 | Load | Frustated
www.fge.com | 6265 | Load | Frustated
www.abc.com | 500 | Load | Frustated
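Without the raw data it's hard to be exact, but a sketch of the usual pattern, assuming your multivalue fields are named URL, Duration, Type and Status and line up positionally (all names and the "Frustated" value are taken from the table above):

```
| eval zipped=mvzip(mvzip(mvzip(URL, Duration, "|"), Type, "|"), Status, "|")
| mvexpand zipped
| eval URL=mvindex(split(zipped,"|"),0), Duration=mvindex(split(zipped,"|"),1), Type=mvindex(split(zipped,"|"),2), Status=mvindex(split(zipped,"|"),3)
| where Status="Frustated"
| table URL Duration Type Status
```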
Hey there, I kindly need support on how to determine the received log SIZE for a specific host. Preferably done through the GUI.
Hint: working on a distributed environment, with its own license master instance.
Thanks in advance,