All Posts
@Arty, there is a VSCode extension: https://splunk.github.io/vscode-extension-splunk-soar/ It allows you to connect to your instance and download, upload, edit, and test apps from VSCode.
-- Hope this helps! If so, please mark this as a solution for others! Happy SOARing! --
Hi Team,
Is it possible to automate entity creation in Splunk ITSI from a CMDB? Currently we create entities manually and add the required fields and values in order to map the service.
Regards, Dayananda
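One common approach is to pull records from the CMDB on a schedule and push them to ITSI's REST interface. A minimal Python sketch, assuming the itoa_interface entity endpoint; the host, credentials, and entity field names below are illustrative, so check the ITSI REST API reference for the exact entity schema:

import json
import requests

# Hypothetical values - replace with your environment's details
ITSI_HOST = "https://splunk.example.com:8089"
AUTH = ("admin", "changeme")  # or use a session/bearer token instead

# One CMDB record mapped to an ITSI entity (field names are illustrative)
entity = {
    "title": "server01",
    "host": ["server01.example.com"],
    "owner": ["team-unix"],
    # Which fields identify the entity vs. carry extra information
    "identifier": {"fields": ["host"], "values": ["server01.example.com"]},
    "informational": {"fields": ["owner"], "values": ["team-unix"]},
}

# POST the entity to the ITSI REST interface
resp = requests.post(
    f"{ITSI_HOST}/servicesNS/nobody/SA-ITOA/itoa_interface/entity",
    auth=AUTH,
    json=entity,
    verify=False,  # only for lab/testing with self-signed certs
)
resp.raise_for_status()
print(resp.json())

Looping this over a CMDB export (CSV, REST, etc.) and scheduling it with cron or a modular input would give you automated entity creation.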
I went to the AppDynamics download portal, but there is nothing like an Enterprise Console or Controller product.
| timechart span=1mon sum(cisco_*) as cisco_*
| rename cisco_* as *
| rename stoppedbyreputation as reputation
| untable _time name count
| fields - _time
| eventstats sum(count) as total
| eval percentage=round(100*count/total,2)
| fields - total
Hello,
To achieve this, you can iterate through your events, calculate the SHA256 hash for each event, and then construct a new JSON object. The resulting JSON will have SHA256 hashes as keys, each associated with the original event. Here's an example implementation in Python:

import json
import hashlib

# Your list of events in JSON format
events = [
    {"key1": "val1", "key2": "val2"},
    {"key1": "val1a", "key2": "val2a"},
    # Add more events as needed
]

# Function to calculate the SHA256 hash for a given event
def calculate_sha256(event):
    event_json = json.dumps(event, sort_keys=True)
    return hashlib.sha256(event_json.encode()).hexdigest()

# Construct the new JSON object with SHA256 hashes as keys
new_json = {}
for event in events:
    new_json[calculate_sha256(event)] = event

# Print the result
print(json.dumps(new_json, indent=2))

This script defines a function (calculate_sha256) to calculate the SHA256 hash for a given event and then constructs the new JSON object (new_json) as per your requirements. You can also check this: https://stackoverflow.com/questions/76263284/how-to-convert-event-object-to-json
I hope this will help you.
The result without the transpose looks like:

reputation   rep_perc   spam      spam_perc   virus   virus_perc
740284221    82.46      9695175   1.08        700     0.000078

I would like to include this table in a glass table, but as it is formatted here it takes up too much space.
Hi @emilep,
what's the result without transpose?
Did you read the command description at https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Transpose ?
In addition, there's this useful link: https://www.splunk.com/en_us/blog/customers/splunk-clara-fication-transpose-xyseries-untable-and-more.html#:~:text=Right%20out%20of%20the%20gate,order%20to%20improve%20your%20visualizations.
Ciao.
Giuseppe
Hi,
I have a query like:

index=federated:ccs_rmail sourcetype="rmail:KIC:reports"
| dedup _time
| timechart span=1mon sum(cisco_*) as cisco_*
| addtotals
| eval rep_perc = round(cisco_stoppedbyreputation/Total*100,2), spam_perc = round(cisco_spam/Total*100,2), virus_perc = round(cisco_virus/Total*100,6)
| table cisco_stoppedbyreputation, rep_perc, cisco_spam, spam_perc, cisco_virus, virus_perc
| rename cisco_spam as spam, cisco_virus as virus, cisco_stoppedbyreputation as reputation
| transpose

The result looks like:

column       row 1
reputation   740284221
rep_perc     82.46
spam         9695175
spam_perc    1.08
virus        700
virus_perc   0.000078

Is it possible to have something like this?

Name         #           %
reputation   740284221   82.46
spam         9695175     1.08
virus        700         0.000078

Thanks,
Emile
Hi,
I'm seeing an error message on my ES search head. How can we sort out this issue?

Search peer idx-xxx.com has the following message: The metric event is not properly structured, source=nmon_perfdata_metrics, sourcetype=nmon_metrics_csv, host=xyz, index=unix-metrics. Metric event data without a metric name and properly formated numerical values are invalid and cannot be indexed. Ensure the input metric data is not malformed, have one or more keys of the form "metric_name:<metric>" (e.g..."metric_name:cpu.idle") with corresponding floating point values.

Thanks
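For reference, a CSV metrics sourcetype generally expects a metric name column plus a numeric value column, roughly the shape below (this follows Splunk's documented metrics CSV conventions; the exact nmon column names here are assumptions):

metric_timestamp,metric_name,_value,host
1700000000,cpu.idle,98.7,xyz
1700000000,cpu.user,1.1,xyz

Rows with a missing metric name or a non-numeric _value would trigger exactly this "not properly structured" message, so checking the raw nmon CSV output for malformed rows is a good first step.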
Can anyone help me with creating alerts for continuous errors?
Hi,
as @bowesmana said, you have set srchIndexesDefault:

srchIndexesDefault = <semicolon-separated list>
* A list of indexes to search when no index is specified.
* These indexes can be wild-carded ("*"), with the exception that "*" does not match internal indexes.
* To match internal indexes, start with an underscore ("_"). All internal indexes are represented by "_*".
* The wildcard character "*" is limited to match either all the non-internal indexes or all the internal indexes, but not both at once.
* No default.

Personally, I always suggest that this should never be set to anything other than an empty/null value. In the long run it generates more issues for your users, as they don't learn to use index=xyz if some indexes are set here. Also, when this is set per role, users get a totally different combination of default indexes depending on which roles have been granted to them. And if you set this to *, it easily generates performance issues if/when you have tens or hundreds of indexes.

r. Ismo
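For illustration, the setting lives per role in authorize.conf; a minimal sketch (the role name and index names are hypothetical):

# authorize.conf
[role_analyst]
# Indexes searched when the user does not specify index= (best left empty)
srchIndexesDefault = main
# Indexes the role is allowed to search at all
srchIndexesAllowed = main;app_logs;_*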
I stopped and restarted the services (Splunk forwarders) on the DCs and it fixed the issue.
When accessing Splunk Cloud, after logging in it asks for the Splunk tenant name. Could you specify what I need to enter to get access? Thank you.
Ok, I've had a similar case, but are you sure your events aren't getting sent downstream? In my case they were, and duplication did indeed occur. TL;DR - open a case with support.

You have two separate things here. One is a connection close. Unfortunately I didn't have time to dig too deeply into it with the customer, but it looks like support-ticket material. As far as I remember from looking at the network traffic, it was indeed the receiving side which suddenly was sending RSTs, which was totally unexpected.

The other thing is that you probably have useACK enabled in your environment, so as the UF tries to re-send the chunk of data it had in its buffer when the connection was closed, it gets signaled that the downstream HF had already seen those events, because apparently closing the connection doesn't prevent the HF from processing the events further.
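For context, useACK is set in the forwarder's outputs.conf; a minimal sketch (the group name and address are placeholders):

# outputs.conf on the UF
[tcpout:primary_hf]
server = 10.0.0.10:9997
# The forwarder waits for acknowledgment from the receiver before
# discarding its buffer; on reconnect it may re-send unacknowledged chunks
useACK = true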
Hi @noobSpl888,
there are three possible issues (see the sketch below):

- the connection between UF and HF isn't open; maybe there's a firewall between them; check using telnet if it's open;
- you didn't enable receiving on the HF; go to [Settings > Forwarding and receiving > Receiving] and enable receiving;
- you didn't point to the correct address; how did you configure your outputs.conf?

Ciao.
Giuseppe
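As a quick reference, the relevant pieces on each side look roughly like this (the IP address and group name are placeholders):

# outputs.conf on the UF
[tcpout]
defaultGroup = my_hf

[tcpout:my_hf]
server = 10.0.0.10:9997

# inputs.conf on the HF (or enable via Settings > Forwarding and receiving)
[splunktcp://9997]
disabled = 0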
Your default indexes to search are probably set to a specific index or indexes, so unless you specify the index you will not find results. Note that it is always a good idea to make your searches as specific as possible, so that your search does not hog resources on the servers. It is always a good idea to specify an index and sourcetype in your searches, and then, if you need to search more widely, increase the scope.
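For example, a narrowed search might look like this (the index and sourcetype names are hypothetical):

index=web sourcetype=access_combined user="aaa"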
I had provided read access for the test user.
Hi,
From the context menu of a "username" field value I chose "new search"; the SPL below was automatically added to the search bar and returned 0 events:

* user="aaa"

However, if I changed the SPL to

index=* user="aaa"

then it showed events related to that user. Why did * user="aaa" not work?
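As the answers above suggest, this usually comes down to the role's default search indexes. A quick way to inspect them (assuming your user can read the authorization REST endpoint):

| rest /services/authorization/roles splunk_server=local
| table title srchIndexesDefault srchIndexesAllowed

If srchIndexesDefault is empty for your role, a search without index= scans no indexes, which would explain the 0 results.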
Hi,
I recently installed UF v9.0.5 on our Windows hosts to send logs to a heavy forwarder, and I am getting the messages below in splunkd.log on the Windows hosts. Can I know what this info is about?

ERROR TcpOutputFd [2404 TcpOutEloop] - Read error. An existing connection was forcibly closed by remote host
INFO AutoLoadBalancedConnectionStrategy [2404 TcpOutEloop] - Connection to 10.xx.xx.xx:9997 closed. Read error. An existing connection was forcibly closed by remote host
WARN AutoLoadBalancedConnectionStrategy [2404 TcpOutEloop] - Possibe duplication of events with channel=source::C:\Programs Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log|host::xxxxx011|splunkd|2606, streamId=0, offset=0 on host=10.xx.xx.xx:9997

Thanks
Gonna maybe revive this thread. We are using RHEL 8.6, and we have Splunk Enterprise running and configured to listen on port 9997. We added it to the firewall with firewall-cmd, and still netstat -l | grep 9997 returns nothing. We have tried different variations of netstat; they all return zero. Also, systemctl status splunk.service doesn't show the service using port 9997.

Any suggestion? Do we need to add 9997 to the service somehow? If so, how? I have set Splunk up on other RHEL 8 servers before with no problem, but something about this one seems different. Also, the inputs.conf shows [splunktcp:\\9997] disabled=0. Any help is appreciated.
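For what it's worth, a splunktcp stanza normally uses forward slashes rather than backslashes, and the listener can be enabled and checked from the CLI; a quick sketch (paths assume a default install):

# inputs.conf - note the forward slashes in the stanza header
[splunktcp://9997]
disabled = 0

# Enable/verify the listener from the CLI, then confirm the socket is open
$SPLUNK_HOME/bin/splunk enable listen 9997
$SPLUNK_HOME/bin/splunk display listen
ss -tlnp | grep 9997

If the stanza really reads [splunktcp:\\9997] on disk, Splunk would not recognize it as a receiving port, which would match the empty netstat output.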