All Posts

The CSV files are generated by automation that captures the server status; the filename records when the file was generated. There is no timestamp inside the file, so I have to use the file-generation timestamp embedded in the naming convention.
Do you mean you have loaded the CSV into a lookup, or that the CSV has been ingested into an index and there is a source field associated with each event containing the file name?
I have events that are generated in CSV format with a timestamp in the file name, as shown below. I need to extract the timestamp from the file name and create a new column as _time, and I need a rex query to extract it as YYYY-MM-DD HH:MM:SS.

D:\automation\miscprocess\test_utilization_info_20240618_195509.csv
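A minimal sketch of one way to do this, assuming the file path above lands in the standard source field when the CSV is ingested (that field name and the ingestion setup are assumptions, not confirmed by the post):

| rex field=source "_(?<file_date>\d{8})_(?<file_time>\d{6})\.csv$"
| eval _time=strptime(file_date." ".file_time, "%Y%m%d %H%M%S")
| eval file_timestamp=strftime(_time, "%Y-%m-%d %H:%M:%S")

The rex pulls the date and time digits out of the file name, strptime converts the concatenated pair into epoch seconds for _time, and strftime renders it in the requested YYYY-MM-DD HH:MM:SS format.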
Hello, for the internal indexes of the search head, should we send them to be stored on the indexers? If so, can we send them to both indexers even though they are not in a cluster? Additionally, I have installed the add-on on the search head, and the index where the collected data is stored is defined on the search head at the following path: /opt/splunk/etc/apps/search/local/indexes.conf. How can I direct this index to both indexers that are not in a cluster?
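Not an authoritative answer, but for reference, forwarding from a search head is typically configured with an outputs.conf along the lines of the sketch below (the host names and port are placeholders; forwarding load-balances automatically across the listed servers, so a cluster is not required, and the target index must also be defined in indexes.conf on both indexers):

# outputs.conf on the search head (a sketch; adjust hosts/ports to your environment)
[indexAndForward]
index = false

[tcpout]
defaultGroup = my_indexers
indexAndForward = false

[tcpout:my_indexers]
server = indexer1.example.com:9997,indexer2.example.com:9997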
Change your global time zone to your local time zone, e.g. EST. To calculate differences between times you need to parse the strings to epoch format:

| eval epoch_timestamp=strptime(c_timestamp,"%FT%T.%6N%z")
| eval local_timestamp=strftime(epoch_timestamp,"%F %T.%6N %Z")
| eval epoch_mod=strptime(c_mod,"%FT%T.%6N%z")
| eval local_mod=strftime(epoch_mod,"%F %T.%6N %Z")
| eval diff=epoch_mod-epoch_timestamp
Hi @jkamdar, check splunkd.log on the forwarders; it is usually a good place to start diagnosing. Good luck!
Hello Community, I hope you can help. I have a CloudFoundry environment which sends all logs to my Splunk forwarder, on which I have installed syslog-ng 4.6. On the Splunk server side, the Splunk App for RFC5424 has been installed and configured as documented. My previous syslog-ng.conf (without RFC5424, with syslog-ng 3.23) looked as follows:

@version: 3.23

options {
    flush_lines(0); time_reopen(10); log_fifo_size(16384);
    chain_hostnames(off); use_dns(no); use_fqdn(no);
    create_dirs(yes); keep_hostname(yes);
    owner(); dir-owner(); group(); dir-group();
    perm(-1); dir-perm(-1);
    keep-timestamp(no); threaded(yes);
};

source s_tcp514 {
    tcp(ip("0.0.0.0") port(514) keep-alive(yes) max-connections(100) log-iw-size(10000));
};

destination env_logs {
    file("/var/log/syslog2splunk/env/${LOGHOST}/${HOST}/${YEAR}-${MONTH}-${DAY}_${HOUR}.log"
        template("${UNIXTIME} ${MSGHDR} ${MESSAGE}\n")
        frac-digits(3) time_zone("UTC")
        owner("splunk") dir-owner("splunk") group("splunk") dir-group("splunk"));
};

log { source(s_tcp514); destination(env_logs); };

The inputs.conf:

[default]
host = my-splk-fwd
index = <my-splk-index-xxx>

[monitor:///var/log/syslog2splunk/env/*/*/*.log]
disabled = false
sourcetype = CF:syslog
host_segment = 6
crcSalt = <SOURCE>

As you can see, my CloudFoundry environment sends syslog over port 514 to the Splunk forwarder, which then ships the events to the Splunk server. Now I have configured RFC5424 in syslog-ng.conf and also in inputs.conf. My CF syslogs should only be reformatted to RFC5424, so I do not want two sources/destinations and a new port in syslog-ng.conf; I only want my current syslogs to be formatted as RFC5424. I also know that it is not possible to configure two sourcetypes in one inputs.conf monitor stanza. So I need to know how to configure both files so that all incoming syslog traffic is written in RFC5424 format; I do not want two directories containing exactly the same logs.

Here is my syslog-ng.conf (with syslog-ng 4.6):

@version: 4.6

options {
    flush_lines(0); time_reopen(10); log_fifo_size(16384);
    chain_hostnames(off); use_dns(no); use_fqdn(no);
    create_dirs(yes); keep_hostname(yes);
    owner(); dir-owner(); group(); dir-group();
    perm(-1); dir-perm(-1);
    keep-timestamp(no); threaded(yes);
};

source s_tcp514 {
    tcp(ip("0.0.0.0") port(514));
};

destination env_logs {
    file("/var/log/syslog2splunk/env/${LOGHOST}/${HOST}/${YEAR}-${MONTH}-${DAY}_${HOUR}.log"
        template("<${PRI}>1 ${ISODATE} ${HOST} ${PROGRAM} ${PID} ${MSGID} ${STRUCTURED-DATA} ${MESSAGE}\n")
        frac-digits(3) time_zone("UTC")
        owner("splunk") dir-owner("splunk") group("splunk") dir-group("splunk"));
};

destination rfc5424_logs {
    file("/var/log/syslog2splunk/rfc5424/${LOGHOST}/${HOST}/${YEAR}-${MONTH}-${DAY}_${HOUR}.log"
        template("<${PRI}>1 ${ISODATE} ${HOST} ${PROGRAM} ${PID} ${MSGID} ${STRUCTURED-DATA} ${MESSAGE}\n")
        frac-digits(3) time_zone("UTC")
        owner("splunk") dir-owner("splunk") group("splunk") dir-group("splunk"));
};

# Log routing
log { source(s_tcp514); destination(env_logs); };

==> Do I need to add an additional source/destination here, or is this configuration OK?
The new inputs.conf looks as follows:

[default]
host = my-splk-fwd
index = my-splk-index_xxx

[monitor:///var/log/syslog2splunk/env/*/*/*.log]
disabled = false
sourcetype = ENV:syslog
host_segment = 6
crcSalt = <SOURCE>

[monitor:///var/log/syslog2splunk/rfc5424/*/*/*.log]
disabled = false
sourcetype = rfc5424_syslog
host_segment = 6
crcSalt = <SOURCE>

With this syslog-ng.conf and inputs.conf I can see the rfc5424 sourcetype, but in my opinion the output is exactly the same as before; I cannot recognize any difference.
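If the goal is a single RFC5424-formatted stream with no duplicate directory, one option (a sketch assembled only from the elements already in the configs above, not a tested answer) would be to keep just the one destination that applies the RFC5424 template and monitor only that path:

# syslog-ng.conf sketch: one source, one RFC5424-formatted destination
destination env_logs {
    file("/var/log/syslog2splunk/env/${LOGHOST}/${HOST}/${YEAR}-${MONTH}-${DAY}_${HOUR}.log"
        template("<${PRI}>1 ${ISODATE} ${HOST} ${PROGRAM} ${PID} ${MSGID} ${STRUCTURED-DATA} ${MESSAGE}\n")
        frac-digits(3) time_zone("UTC")
        owner("splunk") dir-owner("splunk") group("splunk") dir-group("splunk"));
};

log { source(s_tcp514); destination(env_logs); };

inputs.conf would then need only the single monitor stanza on /var/log/syslog2splunk/env/*/*/*.log, with sourcetype = rfc5424_syslog.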
Able to get the event output in table format, but looking for an eval condition to:
1. Remove the T from the timestamp and convert the UTC/GMT value below to EST, in YYYY-MM-DD HH:MM:SS format.
2. Compute the time difference between c_timestamp and c_mod and add it to a TimeTaken column.
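A minimal sketch of the two steps, assuming c_timestamp and c_mod are ISO-8601 strings with microseconds and a UTC offset, and that your Splunk user time zone is set to US/Eastern so strftime renders in EST/EDT (all of these are assumptions, since the post does not show the raw values):

| eval epoch_timestamp=strptime(c_timestamp,"%FT%T.%6N%z")
| eval epoch_mod=strptime(c_mod,"%FT%T.%6N%z")
| eval c_timestamp_est=strftime(epoch_timestamp,"%Y-%m-%d %H:%M:%S")
| eval TimeTaken=tostring(epoch_mod - epoch_timestamp,"duration")

The output format string uses a space instead of the T, and tostring(..., "duration") renders the difference in HH:MM:SS form.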
I am working on a bug. In the TAV dashboard, graphs are not visible in the CFF IT/Business KPIs. After my initial analysis I found that the data comes from the "get_cff_trends" macro, and this macro is not returning any values, so I started validating the "get_cff_trends" macro code.

Query:

| mstats latest(avg.alert_*) as latest.alert_* avg(avg.alert_*) as avg.alert_* sum(sum.alert_*) as sum.alert_*
    WHERE source="iobserve_v5" AND index="em_metrics"
    AND ( service="TA:CFF:Business:Sweden" AND kpi="ServiceHealthScore" )
    OR ( service="TA:CFF:Business Orders Created" AND kpi="Orders count - Total" )
    OR ( service="TA:CFF:Business Work Orders Fulfilled" AND kpi="Orders fulfilled in last 1 hr" )
    OR ( service="TA:CFF:Business Work Orders Delivered" AND kpi="Orders Delivered*" )
    OR ( service="TA:CFF:Business Work Orders Released" AND kpi="Released Orders - Nr Orders In Latest Release" )
    earliest="1718179949.136" latest="1718179949.136" span="10m" BY kpi service
| eval alert_value='avg.alert_value', alert_level=round('avg.alert_level',0)
| eval value = if(kpi like "%Order%" , 'sum.alert_value', alert_value)
| stats avg(value) as avgValue by _time service,kpi
| eval avgValue=round(avgValue,0), minValue=round(minValue,2), maxValue=round(maxValue,2), dday=strftime('_time',"%Y-%m-%d")
| eval avgValue = if( isnull(mvfind(_time, all_times)), 0, mvindex(avgValue,mvfind(_time, all_times)))
| fillnull value="N/A"
| stats list(avgValue) as avgValue values(all_times) as _time by service kpi
| eval avgValue=mvjoin(avgValue,",")
| eval unit=case(like(lower(kpi),"%percent%"),"%", like(lower(kpi),"%conversion%"),"%", like(lower(kpi),"%syncronisation%"),"%", like(lower(kpi),"%availability%"),"%", like(lower(kpi),"%order%"),"#", like(kpiid,"SHKPI%"),"%", like(lower(kpi),"%lead time%"),"days", like(lower(kpi),"%size%"),"#", like(lower(kpi),"%price%"),"#", like(lower(kpi),"%cff%"),"%", like(lower(kpi),"%sample%"),"#", like(lower(kpi),"%calls%"),"#", like(lower(kpi),"%transactions%"),"#", like(lower(kpi),"%sessions%"),"#", like(lower(kpi),"%error%"),"#", like(lower(kpi),"%checkouts%"),"#", like(lower(kpi),"%response time%"),"ms", like(lower(service),"%data quality%"),"%", true(),"%")
| eval display_name=case(kpi like "ServiceHealthScore", "Fulfillment Flow Health", kpi like "Orders count - Total%", "Orders created", kpi like "Orders Delivered*%", "Orders delivered*", kpi like "Orders fulfilled in last 1 hr%", "Orders fulfilled*", kpi like "Released Orders - Nr Orders In Latest Release", "Orders released", true(),kpi)
| appendcols
    [| inputlookup slack_incidents.csv]

We found that when we use "_time" in the query it does not return any values. If we remove "_time", the query returns values up to the 9th line; but the whole query returns no values either way, with or without "_time". Can you please help me resolve this issue?
Hi, I am new to Splunk. As I am trying to integrate Splunk with SentinelOne, I found it frustrating to figure out which API key/token I should use (the SDL one or the Management Console one). Also, I cannot find what the URL and name should be under the Application Configuration page in Splunk. Hope you can help. Many thanks.
Hi @jkamdar, please perform these tests:

index=_internal host=<one_of_the_missing_hosts>

If you have logs, the connection is OK. If the connection is not OK, on the missing forwarder try:

telnet <ip_splunk_server> 9997

If it cannot connect, there is a routing issue; maybe there are local or network firewalls. If instead you do have internal logs, you should check on the forwarder whether the user you're using to run Splunk has the rights to read the files, and obviously whether the paths of the files to read are correct. Ciao. Giuseppe
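For completeness, a couple of forwarder-side checks along the same lines (a sketch; the paths assume a default Linux Universal Forwarder install, so adjust for your environment):

# Show which receivers the forwarder is actively connected to vs. only configured
/opt/splunkforwarder/bin/splunk list forward-server

# Watch the forwarder's own log for tcpout/connection errors
tail -f /opt/splunkforwarder/var/log/splunk/splunkd.log | grep -i tcpout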
A lesson for everyone who posts on this board: carefully copy raw text in pure text format. Your initial post contains so many control characters that it took me an hour to clean up and reconstruct valid JSON. Once the JSON is cleaned up, the solution is really simple.

| spath input=message path={}
| mvexpand {}
| spath input={}
| fields SKIPPED PROCESSED ARUNAME
| stats sum(*) as * by ARUNAME

This should give you:

ARUNAME    PROCESSED  SKIPPED
CPW_00410  4          1
CPW_ARO_H  139        29
CPW_ARO_P  8          36

Here is the emulation I finally reached. Play with it and compare with real data.

| makeresults
| eval _raw="{ \"id\": \"0\", \"severity\": \"Information\", \"message\": \"[{\\\"TARGETSYSTEM\\\":\\\"CPW\\\",\\\"ARUNAME\\\":\\\"CPW_ARO_P\\\",\\\"TOTAL\\\":0,\\\"PROCESSED\\\":0,\\\"REMAINING\\\":0,\\\"ERROR\\\":0,\\\"FAILED\\\":0,\\\"SKIPPED\\\":36,\\\"PROCESSING\\\":0,\\\"DATE\\\":\\\"6/27/2024\\\",\\\"DAYHOUR\\\":23},{\\\"TARGETSYSTEM\\\":\\\"CPW\\\",\\\"ARUNAME\\\":\\\"CPW_ARO_P\\\",\\\"TOTAL\\\":0,\\\"PROCESSED\\\":8,\\\"REMAINING\\\":0,\\\"ERROR\\\":0,\\\"FAILED\\\":0,\\\"SKIPPED\\\":0,\\\"PROCESSING\\\":0,\\\"DATE\\\":\\\"6/27/2024\\\",\\\"DAYHOUR\\\":23},{\\\"TARGETSYSTEM\\\":\\\"CPW\\\",\\\"ARUNAME\\\":\\\"CPW_ARO_P\\\",\\\"TOTAL\\\":0,\\\"PROCESSED\\\":0,\\\"REMAINING\\\":0,\\\"ERROR\\\":1,\\\"FAILED\\\":0,\\\"SKIPPED\\\":0,\\\"PROCESSING\\\":0,\\\"DATE\\\":\\\"6/27/2024\\\",\\\"DAYHOUR\\\":23},{\\\"TARGETSYSTEM\\\":\\\"CPW\\\",\\\"ARUNAME\\\":\\\"CPW_00410\\\",\\\"TOTAL\\\":0,\\\"PROCESSED\\\":4,\\\"REMAINING\\\":0,\\\"ERROR\\\":0,\\\"FAILED\\\":0,\\\"SKIPPED\\\":0,\\\"PROCESSING\\\":0,\\\"DATE\\\":\\\"6/27/2024\\\",\\\"DAYHOUR\\\":23},{\\\"TARGETSYSTEM\\\":\\\"CPW\\\",\\\"ARUNAME\\\":\\\"CPW_ARO_H\\\",\\\"TOTAL\\\":0,\\\"PROCESSED\\\":0,\\\"REMAINING\\\":0,\\\"ERROR\\\":0,\\\"FAILED\\\":0,\\\"SKIPPED\\\":29,\\\"PROCESSING\\\":0,\\\"DATE\\\":\\\"6/27/2024\\\",\\\"DAYHOUR\\\":23},{\\\"TARGETSYSTEM\\\":\\\"CPW\\\",\\\"ARUNAME\\\":\\\"CPW_00410\\\",\\\"TOTAL\\\":0,\\\"PROCESSED\\\":0,\\\"REMAINING\\\":0,\\\"ERROR\\\":0,\\\"FAILED\\\":0,\\\"SKIPPED\\\":1,\\\"PROCESSING\\\":0,\\\"DATE\\\":\\\"6/27/2024\\\",\\\"DAYHOUR\\\":23},{\\\"TARGETSYSTEM\\\":\\\"CPW\\\",\\\"ARUNAME\\\":\\\"CPW_ARO_H\\\",\\\"TOTAL\\\":0,\\\"PROCESSED\\\":139,\\\"REMAINING\\\":0,\\\"ERROR\\\":0,\\\"FAILED\\\":0,\\\"SKIPPED\\\":0,\\\"PROCESSING\\\":0,\\\"DATE\\\":\\\"6/27/2024\\\",\\\"DAYHOUR\\\":23}]\" }"
| spath
``` data emulation above ```
As the title suggests, I have a dashboard with various panels, and I am wondering if it's possible to export a single panel and all its contents (including token values) from XML to JavaScript.
The IN operator is not supported by the where command.  You can use IN with the search command or the in() function with the where command.  In this case, however, the IN is not needed if the subsearch is part of the base search (before the first pipe).

index=provisioning_index cf_org_name=abcd cf_app_name=xyz "ReconCount:" jobNumber
    [ search index=provisioning_index cf_org_name=abcd cf_app_name=xyz operation="operation1" status=SUCCESS
    | search NOT jobType="Canc"
    | table jobNumber ]
| stats count by deliveryInd
| addcoltotals
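As a side note, a minimal illustration of the two supported forms mentioned in the first sentence (the field and values here are hypothetical, borrowed from this thread for the example):

| search jobType IN ("New", "Mod") ``` IN operator with the search command ```
| where in(jobType, "New", "Mod") ``` in() eval function with the where command ```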
Never mind, all. By some miracle I figured it out!

| eval ProdCode = replace(ProdCode,"^(\d)\d{3}-\d(\d{3})","\1xxx-x\2")
Hello, I know it's already been 2 weeks, but I am still waiting for an answer. Can anyone help me out?
Hello, I'm fairly new to Splunk, trying to search using a where clause and filter the results. The query is running long, and I'm wondering if I'm not doing this right. A toned-down version of the search:

index=provisioning_index cf_org_name=abcd cf_app_name=xyz "ReconCount:"
| where jobNumber IN (
    [ search index=provisioning_index cf_org_name=abcd cf_app_name=xyz operation="operation1" status=SUCCESS
    | search NOT jobType="Canc"
    | table jobNumber ])
| stats count by deliveryInd
| addcoltotals
In Splunk, I added the AWS add-on and tried to get data from AWS S3. While creating the input, it took the sourcetype aws:s3:csv by default, and I was receiving the data properly. However, I accidentally changed the configuration for the aws:s3:csv sourcetype, and now the logs are not being received correctly. Can anyone help me by providing the default configuration for this sourcetype?
Here is the answer: https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Web-featuresconf#.5Bfeature:dashboards_csp.5D

In web-features.conf there is a stanza called [feature:dashboards_csp] where you can allowlist domains like this:

dashboards_trusted_domain.<name> = <string>

For example:

dashboards_trusted_domain.smartsheet = app.smartsheet.com