All Posts
Many thanks @gcusello, I don't have the above counters in the input file so I will add them in to see if it gives me what I need. 
Hi team, in this output it appears that TLS is enabled, based on the following:

XXX.XXX@XXX-XXX-XXX ~ % openssl s_client -connect 1.1.1.1:8088
CONNECTED(00000003)
140704518969088:error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version:/AppleInternal/Library/BuildRoots/d9889869-120b-11ee-b796-7a03568b17ac/Library/Caches/com.apple.xbs/Sources/libressl/libressl-3.3/ssl/tls13_lib.c:151:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 294 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : 0000
    Session-ID:
    Session-ID-ctx:
    Master-Key:
    Start Time: 1705416962
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
---

I don't fully understand it, but the "Protocol" field indicates TLS version 1.3, and the "Cipher" field would typically show the cipher suite being used. The "Verify return code" of 0 indicates that certificate verification was successful. However, there is an error related to a TLS protocol version alert, which might be due to a compatibility issue between the OpenSSL version used and the TLS version supported by the server. If this is not causing any problems with the connection, it might be negligible.
Hi @toporagno, first of all, how are you collecting these logs: from a Universal Forwarder, from a Heavy Forwarder, or from another Splunk server? If you are using a UF, I suppose that you are using a Deployment Server to manage it, so you could add the sourcetype and the index in the inputs.conf. If instead you are receiving syslog on an HF, you have to apply the same update to the related inputs.conf. You could also override the index and sourcetype on the Indexer, or (if present) on the HF, but it's easier to modify the inputs.conf. Ciao. Giuseppe
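As a minimal sketch of what Giuseppe describes, an inputs.conf stanza in the deployed app could look like this (the monitor path, index, and sourcetype names here are placeholders, not from the original question):

```
# Hypothetical example, e.g. in $SPLUNK_HOME/etc/apps/my_app/local/inputs.conf
# on the UF (deployed via the Deployment Server) or on the HF
[monitor:///var/log/myapp/app.log]
index = my_index
sourcetype = my_sourcetype
disabled = 0
```

After deploying the change, the forwarder needs a restart (or a Deployment Server reload) for the new index/sourcetype to take effect.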
I was looking for quite a long time, but I'm still wondering whether or not the SaaS portfolio is covered by the Spanish ENS. I found that the cloud is ISO 27001 certified because the hyperscalers supporting it (GCP/AWS) are, but SignalFx doesn't seem to be compliant regarding the use of customer certificates and the lack of native 2FA.
1. Where are you putting those settings?
2. In what format are you ingesting your event logs?
OK. Wait a second. Do you even have TLS enabled on this port? Check the output of

openssl s_client -connect your_splunk_ip:8088

for errors, and also check your _internal index for errors regarding your client's IP.
Hi @danroberts, I suppose that you're speaking of Windows servers. If I remember correctly, among the Windows perfmon indicators extracted using the Splunk_TA_Windows there are also TotalPhysicalMemoryKB and TotalVirtualMemoryKB that you could use. Ciao. Giuseppe
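A rough SPL sketch of how those counters could be turned into a per-host table — note the index name and source filter here are assumptions based on a typical Splunk_TA_Windows setup, so adjust them to your environment:

```
index=windows source=*WinHostMon* OR source=*perfmon*
| stats latest(TotalPhysicalMemoryKB) as total_phys_kb
        latest(TotalVirtualMemoryKB) as total_virt_kb
        by host
| eval total_phys_gb=round(total_phys_kb/1024/1024, 1)
| table host total_phys_gb total_virt_kb
```

A table like this could then be added as a panel next to the existing memory/CPU usage panels in the dashboards.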
OK. We're getting somewhere.

| inputlookup abc.csv
| eval CompanyCode="DSPL"
| eventstats values(CompanyCode) as CompanyCode
| eval 3Let=case(CompanyCode == "DSDE", "BIE", CompanyCode == "DSDE-AS", "PUT", CompanyCode == "DSDE-FS", "STL", CompanyCode == "CSDE", "DAR", CompanyCode == "DSPL", "RAD", CompanyCode == "DSMX", "QUE", CompanyCode == "DSUS", "SSC")
| where '3Let'='place'

OK. I assume this produces your data set and it works pretty OK. But now, if you want to have _all_ events for which a particular field has the max of all possible values, you have several options available (for example, using subsearches), but the easiest one will be to add an additional field which tells you which value is the max year value. For this we use eventstats.

| eventstats max(timeval) as maxyear

Now you have an additional field telling you which year is the max year. So now just filter your values to only leave those where your timeval is equal to that maxyear:

| where timeval=maxyear

And you should be all set.
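Putting the pieces above together, the full pipeline might look like this (same lookup and field names as in the answer; only the two trailing eventstats/where steps are new):

```
| inputlookup abc.csv
| eval CompanyCode="DSPL"
| eventstats values(CompanyCode) as CompanyCode
| eval 3Let=case(CompanyCode == "DSDE", "BIE", CompanyCode == "DSDE-AS", "PUT", CompanyCode == "DSDE-FS", "STL", CompanyCode == "CSDE", "DAR", CompanyCode == "DSPL", "RAD", CompanyCode == "DSMX", "QUE", CompanyCode == "DSUS", "SSC")
| where '3Let'='place'
| eventstats max(timeval) as maxyear
| where timeval=maxyear
```

The eventstats/where pair keeps every event whose timeval equals the overall maximum, unlike `| sort - timeval | head 1`, which would keep only one event.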
Hi, first of all, thank you for your response. I am sharing the outputs I got when I tried using HTTP and HTTPS below. It may be due to the SSL setting of the HTTP collector, but I think there will be other logs affected.

XXX.XXX@XXX-XXX-XXX ~ % curl -kv http://1.1.1.1:8088/services/collector/raw -H "Authorization: Splunk XXX-XXX-XXX-XXX-XXX" -d '{"event": "cheesecake"}' --insecure
*   Trying 1.1.1.1:8088...
* Connected to 1.1.1.1 (1.1.1.1) port 8088 (#0)
> POST /services/collector/raw HTTP/1.1
> Host: 1.1.1.1:8088
> User-Agent: curl/8.1.2
> Accept: */*
> Authorization: Splunk XXX-XXX-XXX-XXX-XXX
> Content-Length: 23
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 200 OK
< Date: Tue, 16 Jan 2024 14:31:55 GMT
< Content-Type: application/json; charset=UTF-8
< X-Content-Type-Options: nosniff
< Content-Length: 27
< Vary: Authorization
< Connection: Keep-Alive
< X-Frame-Options: SAMEORIGIN
< Server: Splunkd
<
* Connection #0 to host 1.1.1.1 left intact
{"text":"Success","code":0}%

XXX.XXX@XXX-XXX-XXX ~ % curl -kv https://1.1.1.1:8088/services/collector/raw -H "Authorization: Splunk XXX-XXX-XXX-XXX-XXX" -d '{"event": "cheesecake"}' --insecure
*   Trying 1.1.1.1:8088...
* Connected to 1.1.1.1 (1.1.1.1) port 8088 (#0)
* ALPN: offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
* LibreSSL/3.3.6: error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version
* Closing connection 0
curl: (35) LibreSSL/3.3.6: error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version
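Plain HTTP succeeding while HTTPS fails with a protocol-version alert usually points at the HEC SSL configuration: either SSL is effectively off on the port, or the TLS versions the server allows don't overlap with what the client (LibreSSL here) offers. As a hedged sketch to check against your own deployment (the file location and values are assumptions, not taken from this thread), the relevant stanza might look like:

```
# Hypothetical HEC settings, e.g. in
# $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
[http]
disabled = 0
enableSSL = 1
# The allowed versions must include what the client offers:
sslVersions = tls1.2, tls1.3
```

After changing these settings, Splunk needs a restart; comparing `enableSSL` with the scheme the client uses (http vs https) is the first thing to verify.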
We are using perfmon, and I have built some dashboards to show memory/CPU usage, with alerts that trigger if either goes above a certain %. Is there a way to obtain the total memory assigned to a server? What I want to do is create a table of the total assigned memory and place it in the above dashboards, so our testers know how much memory a server has without me manually creating a table with each stat in it.
Hello @ITWhisperer, thank you for your response. There is no distinct observation about the value of the chart on which drilldown does not take place. Moreover, on the chart where the issue occurs, when I click on it with the mouse, no further activity occurs. I replaced trellis.value with trellis.value|u and refreshed the dashboard, but it still works only for 2 out of 3 visuals from the trellis. Thank you.
You seem to be specifying that you want to use SSL (https) but you don't appear to be providing any certificates etc. Have you tried using http instead?
"Seem"?  Either it worked or it didn't. In which file did you add that line?  On which Splunk instance?  Did you restart Splunk after making the change?
| inputlookup abc.csv
| eval CompanyCode="DSPL"
| eventstats values(CompanyCode) as CompanyCode
| eval 3Let=case(CompanyCode == "DSDE", "BIE", CompanyCode == "DSDE-AS", "PUT", CompanyCode == "DSDE-FS", "STL", CompanyCode == "CSDE", "DAR", CompanyCode == "DSPL", "RAD", CompanyCode == "DSMX", "QUE", CompanyCode == "DSUS", "SSC")
| where '3Let'='place'
| sort - timeval
| table count timeval
| head 1
| appendpipe
    [ stats count
    | where count==0
    | eval timeval=strftime(now(),"%Y") ]
Is there something particular about the value which doesn't work? Have you tried encoding the value?

<link target="_blank">/xxx/yyy/zzz?test_Tok=$trellis.value|u$</link>

Token usage in dashboards - Splunk Documentation
I need to change the index for data sent by a Universal Forwarder. I have this data: source_type="pippo" with sourcetype="paperino" and index="pluto", and I need to send all of this data to another index, e.g. index="nino". I tried with a props.conf and a transforms.conf, but it doesn't work.
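For reference, an index override in props.conf/transforms.conf only works where data is parsed (on the indexer or a Heavy Forwarder, not on the Universal Forwarder itself). A minimal sketch, assuming the goal is to reroute everything with that sourcetype:

```
# props.conf (on the indexer or HF)
[paperino]
TRANSFORMS-route_index = route_to_nino

# transforms.conf
[route_to_nino]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = nino
```

The target index ("nino" here) must already exist on the indexer, and Splunk must be restarted after the change; if the files were placed on the UF, that would explain why it doesn't work.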
ChromeOS is not a supported operating system nor is a Chromebook likely to have enough disk space or RAM to run Splunk.  You can, however, use a Chromebook to access Splunk Cloud or a Splunk Enterprise installation on another (supported) machine.
 
Hi team, I'm trying to send a curl request from my local machine to a Splunk server, but I'm encountering the following error. Have you come across this error before? I've found similar issues on Stack Overflow, but none of the solutions seem to work for me. I thought reaching out here might provide quick support in case anyone has experienced a specific issue related to this. Thank you in advance for your assistance.

aaa.bbb@MyComputer-xxx ~ % curl https://1.1.1.1:8088/services/collector/raw -H "Authorization: Splunk XXXX-XXXX-XXXX-XXXX-XXXX" -d '{"event": "cheesecake"}' --insecure

Output:
curl: (35) LibreSSL/3.3.6: error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version

Thanks
I got the same issue, and restarting the whole environment didn't fix it. The issue in my case was the EUM database not working properly. The error was:

Analytics service unavailable: Host 'nginx:9080' returned code 401 with message 'Status code: [401], Message: HTTP 401 Unauthorized'. Please contact support if this error persists.

FYI: I'm sure it was working well before, but I had been testing stopping EUM and going through the upgrade process before this happened. My solution was:

ps -ef | grep -i eum | grep -v grep
kill -9 eum-PID
ps -ef | grep -i database | grep -v grep
kill -9 eum-database-PID
cd /opt/appdynamics/eum/mysql
bin/mysqld_safe --defaults-file=/opt/appdynamics/eum/mysql/db.cnf
cd /opt/appdynamics/eum/eum-processor
./bin/eum.sh start

You need to make sure no errors come out of the eum start command, and run the ps command again to make sure the process is running well. Also make sure that

analytics.accountAccessKey == ad.accountmanager.key.eum == appdynamics.es.eum.key

as Rayan has mentioned above. This could also happen if the Events Service is not in a healthy state. In that scenario, you should do:

cd /opt/appdynamics/platform/platform-admin/
bin/platform-admin.sh show-events-service-health

# In case it's not healthy but every node is running:
curl http://localhost:9200/_cat/shards?v
curl http://localhost:9200/_cat/indices?v

# Note: make sure port 9200 is enabled on each Events Service node, by enabling it on each node:
vi /opt/appdynamics/platform/product/events-service/processor/conf/events-service-api-store.properties
ad.es.node.http.enabled=true

curl http://localhost:9081/healthcheck?pretty=true

# To grep unassigned indexes:
curl -XGET localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason | grep UNASSIGNED

# Manually assign each one with:
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{ "commands" : [ { "allocate" : { "index" : "manually-add-each-index-name", "shard" : 0, "node" : "event-service-ip", "allow_primary" : true } } ]}'