All Posts


Hi @emilep, what's the result without transpose? Did you read the command description at https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Transpose ? In addition, there's this useful link: https://www.splunk.com/en_us/blog/customers/splunk-clara-fication-transpose-xyseries-untable-and-more.html#:~:text=Right%20out%20of%20the%20gate,order%20to%20improve%20your%20visualizations.  Ciao. Giuseppe
Hi, I have a query like:

index=federated:ccs_rmail sourcetype="rmail:KIC:reports"
| dedup _time
| timechart span=1mon sum(cisco_*) as cisco_*
| addtotals
| eval rep_perc = round(cisco_stoppedbyreputation/Total*100,2), spam_perc = round(cisco_spam/Total*100,2), virus_perc = round(cisco_virus/Total*100,6)
| table cisco_stoppedbyreputation, rep_perc, cisco_spam, spam_perc, cisco_virus, virus_perc
| rename cisco_spam as spam, cisco_virus as virus, cisco_stoppedbyreputation as reputation
| transpose

The result looks like:

column        row 1
reputation    740284221
rep_perc      82.46
spam          9695175
spam_perc     1.08
virus         700
virus_perc    0.000078

Is it possible to have something like this instead?

Name        #            %
reputation  740284221    82.46
spam        9695175      1.08
virus       700          0.000078

Thanks, Emile
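One possible approach, sketched against the field names produced by the query above (not tested against this data): transpose as before, derive the metric name and a column type from the transposed field names, then re-pivot with chart.

```
... | table reputation rep_perc spam spam_perc virus virus_perc
| transpose
| eval Name=replace(column, "_perc$", ""),
       type=if(match(column, "_perc$"), "%", "#")
| chart values("row 1") over Name by type
```

The idea is that after transpose every value sits in "row 1" with its original field name in "column", so a chart over the stripped name, split by the derived type, yields one row per metric with "#" and "%" columns.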
Hi, I'm seeing an error message on my ES search head. How can we sort out this issue?

Search peer idx-xxx.com has the following message: The metric event is not properly structured, source=nmon_perfdata_metrics, sourcetype=nmon_metrics_csv, host=xyz, index=unix-metrics. Metric event data without a metric name and properly formatted numerical values are invalid and cannot be indexed. Ensure the input metric data is not malformed, have one or more keys of the form "metric_name:<metric>" (e.g. "metric_name:cpu.idle") with corresponding floating point values.

Thanks
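For reference, a hedged sketch of what a well-formed metric event looks like when sent as JSON to a metrics index over HEC (the timestamp and value are hypothetical); the key point the error is complaining about is the "metric_name:<metric>" key with a numeric value:

```
{
  "time": 1699999999,
  "event": "metric",
  "source": "nmon_perfdata_metrics",
  "host": "xyz",
  "index": "unix-metrics",
  "fields": {
    "metric_name:cpu.idle": 97.2
  }
}
```

If the nmon CSV input is producing rows without a metric name or with non-numeric values, those rows will be rejected with exactly this message.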
Can anyone help me with creating alerts for continuous errors?
Hi, as @bowesmana said, you have set srchIndexesDefault:

srchIndexesDefault = <semicolon-separated list>
* A list of indexes to search when no index is specified.
* These indexes can be wild-carded ("*"), with the exception that "*" does not match internal indexes.
* To match internal indexes, start with an underscore ("_"). All internal indexes are represented by "_*".
* The wildcard character "*" is limited to match either all the non-internal indexes or all the internal indexes, but not both at once.
* No default.

Personally, I always suggest that this should never be set to anything other than an empty/null value. In the long run it generates more issues for your users, as they don't learn to use index=xyz if some indexes are set here. Also, when this is set per role, users end up with totally different combinations of default indexes depending on which roles have been granted to them. If you set this to *, it can easily cause performance issues when you have tens or hundreds of indexes.

r. Ismo
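As a sketch of the recommendation above (the role name and index are hypothetical), a role stanza in authorize.conf that leaves the default empty while still allowing specific indexes would look like:

```
# authorize.conf -- role name and index are placeholders
[role_analyst]
# left empty so users must write index=xyz explicitly
srchIndexesDefault =
# semicolon-separated list of indexes this role may search
srchIndexesAllowed = xyz;_internal
```

With this setup a bare search with no index term returns nothing, which nudges users toward always scoping their searches.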
I stopped and restarted the services (Splunk forwarders) on the DCs and it fixed the issue.
When accessing Splunk Cloud after logging in, it asks for the Splunk Tenant Name. Could you specify what I need to enter to get access? Thank you.
Ok, I've had a similar case, but are you sure your events aren't getting sent downstream? In my case they were, and duplication did indeed occur. TL;DR - open a case with support.

You have two separate things here. One is the connection close. Unfortunately I didn't have time to dig too deeply into it with the customer, but it looks like support ticket material. As far as I remember from looking at the network traffic, it was indeed the receiving side which suddenly started sending RSTs, which was totally unexpected.

The other thing is that you probably have useAck enabled in your environment, so as the UF tries to re-send the chunk of data it had in its buffer when the connection was closed, it gets signaled that the downstream HF has already seen those events, because apparently closing the connection doesn't prevent the HF from processing the events further.
Hi @noobSpl888, there are three possible issues:

* the connection between the UF and the HF isn't open, maybe there's a firewall between them; check with telnet whether it's open;
* you haven't enabled receiving on the HF; go to [Settings > Forwarding and Receiving > Receiving] and enable receiving;
* you aren't pointing to the correct address; how did you configure your outputs.conf?

Ciao. Giuseppe
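For the third point, a minimal outputs.conf on the UF would look something like this (the hostname is a placeholder for your HF's address):

```
# outputs.conf on the UF -- hf.example.com is a placeholder
[tcpout]
defaultGroup = primary_hf

[tcpout:primary_hf]
server = hf.example.com:9997
```

The port here must match the receiving port enabled on the HF (9997 by convention).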
Your default indexes to search are probably set to a specific index or indexes, so unless you specify the index you will not find results. Note that it is always a good idea to make your searches as specific as possible so that they do not hog resources on the servers. Specify an index and sourcetype in your searches, and then, if you need to search wider, increase the scope.
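For example (the index and sourcetype here are hypothetical), prefer a scoped search like:

```
index=web sourcetype=access_combined user="aaa"
```

over a bare * user="aaa", which relies entirely on the role's default search indexes.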
I had provided read access to the test user.
Hi, from the context menu of a "username" field value I chose "new search"; the below SPL was automatically added to the search bar and returned 0 events:

* user="aaa"

However, if I changed the SPL to

index=* user="aaa"

then it showed events related to that user. Why did * user="aaa" not work?
Hi, I recently installed UF v9.0.5 on our Windows hosts to send logs to a heavy forwarder, and I am getting the below messages in splunkd.log on the Windows hosts. Can I know what this info is about?

ERROR TcpOutputFd [2404 TcpOutEloop] - Read error. An existing connection was forcibly closed by remote host
INFO AutoLoadBalancedConnectionStrategy [2404 TcpOutEloop] - Connection to 10.xx.xx.xx:9997 closed. Read error. An existing connection was forcibly closed by remote host
WARN AutoLoadBalancedConnectionStrategy [2404 TcpOutEloop] - Possible duplication of events with channel=source::C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log|host::xxxxx011|splunkd|2606, streamId=0, offset=0 on host=10.xx.xx.xx:9997

Thanks
Gonna maybe revive this thread. We are using RHEL 8.6 and we have Splunk Enterprise running and configured to listen on port 9997. We added the port to the firewall with firewall-cmd, and still netstat -l | grep 9997 returns nothing. We have tried different variations of netstat; they all return nothing. Also, systemctl status splunk.service doesn't show the service using port 9997. Any suggestions? Do we need to add 9997 to the service somehow? If so, how? I have set Splunk up on other RHEL 8 servers before with no problem, but something about this one seems different. Also, the inputs.conf shows [splunktcp:\\9997] disabled=0. Any help is appreciated.
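One thing worth checking: the splunktcp stanza name uses forward slashes, so if inputs.conf literally contains [splunktcp:\\9997] with backslashes, Splunk may not recognize the input and would never open the port. The documented form of a minimal receiving input is:

```
# inputs.conf on the receiving instance
[splunktcp://9997]
disabled = 0
```

After correcting the stanza and restarting, something like ss -tlnp | grep 9997 should show splunkd listening.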
Hi, connection metrics are logged by splunkd to metrics.log. To search metrics.log directly, replace ... in the following search with a space-delimited list of your expected egress addresses:

index=_internal source=*metrics.log* host=idx-i-* group=tcpin_connections sourceIp IN (...)

The same data is also logged to the _metrics metrics index:

| mstats avg(spl.mlog.tcpin_connections._tcp_KBps) as KBps where index=_metrics group=tcpin_connections sourceIp IN (...) by sourceIp

You can use the search/jobs endpoint to run an asynchronous or blocking request to execute one of the searches above. See https://docs.splunk.com/Documentation/SplunkCloud/9.0.2305/RESTREF/RESTsearch#search.2Fjobs for more information.
Hi, Ingest actions may be the simplest solution. For each source type, e.g. kube:container:container1, create an ingest action with a "Set Index" rule and set the value to the target index. If you need to route events with the same source type to different indexes, you can add a regular expression or eval-based condition to match content within the events and chain together multiple Set Index rules. More information is available at https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/DataIngest#Set_index.
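On a classic heavy forwarder the equivalent routing can be sketched in props.conf/transforms.conf (the sourcetype, transform name, and target index below are hypothetical):

```
# props.conf
[kube:container:container1]
TRANSFORMS-route_index = route_container1

# transforms.conf
[route_container1]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = container1_index
```

The REGEX here matches every event of the sourcetype; to route only some events with the same sourcetype to different indexes, tighten the REGEX to match content within the events, mirroring the conditional Set Index rules described above.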
Hi Kelly, the following error is normal when no proxy is enabled or no proxy credentials are saved in TA-Zscaler_CIM:

PersistentScript - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-Zscaler_CIM/bin/TA_Zscaler_CIM_rh_settings.py persistent}: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#TA-Zscaler_CIM#configs/conf-ta_zscaler_cim_settings, user=proxy.

The error is likely normal in TA-sailpoint_identitynow-auditevent-add-on and TA-trendmicrocloudappsecurity for the same reason.

The read timeout error in TA-trendmicrocloudappsecurity is caused by the Trend Micro /v1/siem/security_events endpoint not returning an HTTP response within 5 minutes, the default read timeout inherited by TA-trendmicrocloudappsecurity when it calls the Splunk Add-on Builder helper.send_http_request() method with timeout=None. The timeout value is not configurable, but TA-trendmicrocloudappsecurity/bin/input_module_tmcas_detection_logs.py could be modified to use a longer timeout value:

response = helper.send_http_request(
    url,
    "GET",
    parameters=params,
    payload=None,
    headers=headers,
    cookies=None,
    verify=True,
    cert=None,
    timeout=(None, 60),
    use_proxy=use_proxy,
)

However, this change should be made by Trend Micro, preferably by making the connect and read timeout values fully configurable.

Explosions in splunkd.log events can often be caused by failures in modular or scripted inputs, where a script logs a message before a process fails, Splunk immediately restarts the process, and the cycle repeats ad infinitum. Your screenshots don't necessarily point to that, but you may get closer to a cause with:

index=_internal source=*splunkd.log* host=*splunkdcloud*
| cluster showcount=t
| sort 10 - cluster_count
| table cluster_count _raw

If you don't see anything with a cluster_count of the expected magnitude, remove host=*splunkdcloud* from the search.
Change the sort limit from 10 to 0 to show all results.
Hi Alex, yes, this issue was resolved for us with the 9.1.1 release (we originally tested with a Splunk 9.0.6 debug build that also had the fix, so 9.0.6 should also be fine). We are no longer experiencing the issue. Peter
Hypothetically, Example:Isolation:Url would have some other configuration extracting jsessionid, access_token, id_token, or password, possibly through another props stanza, e.g. [host::...] or [source::...], matching the input.
@KR1 Can you please show the <input> block for your multiselect? You need a suitable <change> block to be able to set/unset the nf/sf tokens correctly.
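As a hedged sketch of what such a <change> block can look like (the token names, choice values, and conditions below are assumed from the description, not taken from your dashboard):

```
<input type="multiselect" token="filter">
  <label>Filter</label>
  <choice value="nf">NF</choice>
  <choice value="sf">SF</choice>
  <change>
    <condition match="match($filter$, &quot;nf&quot;)">
      <set token="nf">true</set>
      <unset token="sf"></unset>
    </condition>
    <condition>
      <unset token="nf"></unset>
      <set token="sf">true</set>
    </condition>
  </change>
</input>
```

The exact match expressions and which token is set or unset in each branch depend on what nf/sf control in the panels, but the pattern of condition elements inside change is the mechanism to use.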