All Topics


I need to report hosts that are configured to receive app.log details and also report the ones that are missing. For this, I use the query:

index=application sourcetype="application:server:log" | stats values(host) as hostnames by customer_name

This retrieves the hostnames for each customer_name from the sourcetype. I get a result like:

customer_name | hostnames
customer1     | server1
customer2     | server2, server3

Then I join the result by the customer_name field with the second part of the query, which retrieves the hostnames for each customer_name from the server_info.csv lookup table:

[| inputlookup server_info.csv | rename customer as customer_name | stats values(host) as hostnames_lookup by customer_name]

Here I get a result like:

customer_name | hostnames_lookup
customer1     | server1, server100
customer2     | server2, server3, server101

Later, I expand both multivalue fields and evaluate both fields to mark each host as configured or not configured. The evaluation looks like this:

| mvexpand hostnames
| mvexpand hostnames_lookup
| eval not_configured = if(hostnames == hostnames_lookup, hostnames, null())
| eval configured = if(hostnames != hostnames_lookup, hostnames, null())
| fields customer_name, hostnames, hostnames_lookup, configured, not_configured

My final query looks like this:

index=application sourcetype="application:server:log"
| stats values(host) as hostnames by customer_name
| join customer_name
    [| inputlookup server_info.csv
     | rename customer as customer_name
     | stats values(host) as hostnames_lookup by customer_name]
| mvexpand hostnames
| mvexpand hostnames_lookup
| eval not_configured = if(hostnames == hostnames_lookup, hostnames, null())
| eval configured = if(hostnames != hostnames_lookup, hostnames, null())
| fields customer_name, hostnames, hostnames_lookup, configured, not_configured

However, when the evaluation completes, the results are not as expected: the matching logic doesn't work and the output is incorrect. No values appear in the not_configured column, and the configured column only returns the values already in hostnames. I'd expect the configured field to show all the servers configured to receive app.log, and not_configured to hold the hostnames that are present in the lookup but are still not configured to receive logs.

Expected output:

customer_name | hostnames        | hostnames_lookup            | configured       | not_configured
customer1     | server1          | server1, server100          | server1          | server100
customer2     | server2, server3 | server2, server3, server101 | server2, server3 | server101

Current output:

customer_name | hostnames        | hostnames_lookup            | configured       | not_configured
customer1     | server1          | server1, server100          | server1          |
customer2     | server2, server3 | server2, server3, server101 | server2, server3 |

Essentially, customer1 should display server1 as configured and server100 as not_configured, and likewise for customer2, as in the expected output table. That would mean server100 and server101 are part of the lookup but are not configured to receive app.log. How can I evaluate this differently so that the comparison works as expected? Is it possible to compare the values in this fashion? Is there anything wrong with the current comparison logic? Should I not use mvexpand on the extracted fields so that they are compared as expected?
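A minimal sketch of one way to get the expected output without mvexpand, assuming hostnames and hostnames_lookup are produced exactly as above and mvmap is available (Splunk 8.0+). The double mvexpand builds a cross-product of the two lists, so per-row equality can never line the values up; comparing each lookup value against the whole hostnames list avoids that:

index=application sourcetype="application:server:log"
| stats values(host) as hostnames by customer_name
| join customer_name
    [| inputlookup server_info.csv
     | rename customer as customer_name
     | stats values(host) as hostnames_lookup by customer_name]
| eval configured = mvmap(hostnames_lookup, if(isnotnull(mvfind(hostnames, "^".hostnames_lookup."$")), hostnames_lookup, null()))
| eval not_configured = mvmap(hostnames_lookup, if(isnull(mvfind(hostnames, "^".hostnames_lookup."$")), hostnames_lookup, null()))
| fields customer_name, hostnames, hostnames_lookup, configured, not_configured

Note that mvfind treats its second argument as a regex, so hostnames containing regex metacharacters would need escaping; plain names like server1 are fine.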
We have an issue with long JSON log events that exceed the console width limit: they are split into 2 separate events, each of which is not valid JSON. How do we handle this correctly? Is it possible to restore the broken messages on the Splunk side, or do we need to go back to the logger team to learn the width limitation and chunk the messages properly? How should large JSON events be handled?
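If the two halves arrive consecutively in the same input stream, one Splunk-side option is to control event breaking in props.conf on the indexer or heavy forwarder. A sketch, assuming a hypothetical sourcetype name app:json and that every complete JSON object starts with { at the beginning of a line:

[app:json]
# Break only where a new JSON object starts, not at every newline
LINE_BREAKER = ([\r\n]+)(?=\{)
SHOULD_LINEMERGE = false
# Raise the per-event byte cap so large objects are not cut off
TRUNCATE = 500000

This only helps if the fragments actually reach Splunk in order on the same channel; if the logger hard-wraps in the middle of a line, fixing the logger (or raising its width limit) is the more reliable path.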
Hello! I would like to run a search which would display all information regarding entities and services. For example, for entities, where could I find the information stored for Entity Description, Entity Information Field, and Entity Title? For services, where could I find the information stored for Service Description, Service Title, and Service Tags? What type of search query could I run to find this information? Thanks,
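Assuming this is ITSI, a hedged starting point: service and entity definitions live in KV store collections that ITSI exposes as lookups, commonly itsi_services and itsi_entities. Exact field names vary by version, so inspect what is actually stored first:

| inputlookup itsi_entities
| fieldsummary
| table field

| inputlookup itsi_services
| fieldsummary
| table field

Once the relevant fields are known (title and description usually exist on both), something like | inputlookup itsi_services | table title, description gives the service-level view.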
Thanks in advance. I have four inputs (Time, Environment, Application Name, and Interface Name) and two panels, one for finance and one for bank. The panels have different application names and interface names, so I tried to use depends and rejects on the inputs. If I switch from one panel to the other, the dropdown and text box inputs remain the same, but their values need to change per panel.

<row>
  <panel id="panel_layout">
    <input id="input_link_split_by" type="link" token="tokSplit" searchWhenChanged="true">
      <label></label>
      <choice value="Finance">OVERVIEW</choice>
      <choice value="BankIntegrations">BANKS</choice>
      <default>OVERVIEW</default>
      <initialValue>OVERVIEW</initialValue>
      <change>
        <condition label="Finance">
          <set token="Finance">true</set>
          <unset token="BankIntegrations"></unset>
        </condition>
        <condition label="BankIntegrations">
          <set token="BankIntegrations">true</set>
          <unset token="Finance"></unset>
        </condition>
      </change>
    </input>
  </panel>
</row>
<row>
  <panel>
    <input type="time" token="time" searchWhenChanged="true">
      <label>Time Interval</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="env" searchWhenChanged="true">
      <label>Environment</label>
      <choice value="*">ALL</choice>
      <choice value="DEV">DEV</choice>
      <choice value="TEST">TEST</choice>
      <choice value="PRD">PRD</choice>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="dropdown" token="applicationName" searchWhenChanged="true" depends="$Finance$" rejects="$BankIntegrations$">
      <label>ApplicationName</label>
      <choice value="*">ALL</choice>
      <choice value="p-wd-finance-api">p-wd-finance-api</choice>
      <default>"p-wd-finance-api</default>
      <initialValue>p-oracle-fin-processor","p-oracle-fin-processor-2","p-wd-finance-api</initialValue>
      <fieldForLabel>ApplicationName</fieldForLabel>
      <fieldForValue>ApplicationName</fieldForValue>
    </input>
    <input type="text" token="InterfaceName" searchWhenChanged="true" depends="$Finance$" rejects="$BankIntegrations$">
      <label>InterfaceName</label>
      <default></default>
      <initialValue></initialValue>
    </input>
    <input type="dropdown" token="applicationName" searchWhenChanged="true" depends="$BankIntegrations$" rejects="$Finance$">
      <label>ApplicationName</label>
      <choice value="p-wd-finance-api">p-wd-finance-api</choice>
      <default>p-oracle-fin-processor","p-oracle-fin-processor-2","p-wd-finance-api</default>
      <initialValue>p-oracle-fin-processor","p-oracle-fin-processor-2","p-wd-finance-api</initialValue>
      <fieldForLabel>ApplicationName</fieldForLabel>
      <fieldForValue>ApplicationName</fieldForValue>
    </input>
    <input type="text" token="InterfaceName" searchWhenChanged="true" depends="$BankIntegrations$" rejects="$Finance$">
      <label>InterfaceName</label>
      <default></default>
      <initialValue></initialValue>
    </input>
  </panel>
</row>
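One thing that stands out in the XML above: both ApplicationName dropdowns share token="applicationName" and both InterfaceName text boxes share token="InterfaceName". Simple XML keeps the last value a token was set to even while its input is hidden by depends/rejects, which matches the symptom of values not switching per panel. A hedged sketch of one workaround, using distinct (hypothetical) token names per panel and a change handler that copies the active one into the single token the searches use:

<input type="dropdown" token="applicationNameFin" searchWhenChanged="true" depends="$Finance$" rejects="$BankIntegrations$">
  <label>ApplicationName</label>
  <choice value="*">ALL</choice>
  <choice value="p-wd-finance-api">p-wd-finance-api</choice>
  <default>*</default>
  <change>
    <set token="applicationName">$value$</set>
  </change>
</input>

The bank-side dropdown would mirror this with token="applicationNameBank" and its own choices, also setting applicationName in its change handler, so the shared token always reflects the visible input.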
Hi, I have the raw data/event below. Splunk gets the raw data every 2 hours, and only 4 times a day: at 11:36, 13:36, 15:36, and 17:36. Per day I am getting ~2.5K events. The DATETIME field tells what time the job ran:

2024-04-15 21:36:58.960, DATETIME="2024-04-15 17:36:02", REGION="India", APPLICATION="webApp", CLIENT_CODE="ind", MARKET_CODE="SEBI", TRADE_COUNT="1"

What I am looking for: when I run the dashboard, I want to monitor the trade count by MARKET_CODE over the latest DATETIME. For instance, if I run the dashboard at 14:00, the DATETIME field might hold 11:36 (~600 events) and 13:36 (~600 events). I want to see only the 13:36 batch of ~600 events, and the metric would be TRADE_COUNT by MARKET_CODE. Thanks, Selvam.
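A sketch of one way to keep only the latest batch, assuming your base search returns these events (the index and sourcetype below are placeholders). Because DATETIME is formatted %Y-%m-%d %H:%M:%S, its lexicographic maximum is also its chronological maximum, so eventstats can find the latest run directly:

index=your_index sourcetype=your_sourcetype
| eventstats max(DATETIME) as latest_run
| where DATETIME == latest_run
| stats sum(TRADE_COUNT) as trade_count by MARKET_CODE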
Hello Team, I am trying to build a solution using a multiselect input filter where an index token is passed to the panels. With the code below, I currently see the filter values "Stack1", "Stack2" and "Stack3", but the problem is that the value passed comes from the label. I need index_tkn to hold the values aws_stack02_p, aws_stack01_p, aws_stack01_n.

<input type="multiselect" token="index_tkn" searchWhenChanged="false">
  <label>Select Stack</label>
  <valuePrefix>index="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <fieldForLabel>index</fieldForLabel>
  <fieldForValue>label</fieldForValue>
  <search>
    <query>index IN ({aws_stack02_p,aws_stack01_p,aws_stack01_n}) | eval label = case(index == "aws_stack02_p", "Stack1", index == "aws_stack01_p", "Stack2", index == "aws_stack01_n", "Stack3") | stats count by label</query>
    <earliest>$time_tkn.earliest$</earliest>
    <latest>$time_tkn.latest$</latest>
  </search>
</input>
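It looks like fieldForLabel and fieldForValue are swapped: the label should show the friendly name while the value carries the index. The populating search also has to keep the index field alive through stats. A sketch of the corrected input (same tokens and indexes as above; the curly braces are dropped from the IN list, since index IN (a, b, c) is the usual syntax):

<input type="multiselect" token="index_tkn" searchWhenChanged="false">
  <label>Select Stack</label>
  <valuePrefix>index="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <fieldForLabel>label</fieldForLabel>
  <fieldForValue>index</fieldForValue>
  <search>
    <query>index IN (aws_stack02_p, aws_stack01_p, aws_stack01_n) | eval label = case(index == "aws_stack02_p", "Stack1", index == "aws_stack01_p", "Stack2", index == "aws_stack01_n", "Stack3") | stats count by index, label</query>
    <earliest>$time_tkn.earliest$</earliest>
    <latest>$time_tkn.latest$</latest>
  </search>
</input>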
I am using Splunk Cloud and the design is UF > HF > Splunk Cloud. On the HFs we have an outputs.conf file like below. Is it OK that the sslPassword is the same for all HFs when I am using multiple heavy forwarders?

root@hostname:/opt/splunk/etc/apps/100_stackname_splunkcloud/local # cat outputs.conf

[tcpout]
sslPassword = 27adhjwgde2y67dvff3tegd36scyctefd73******************
channelReapLowater = 10
channelTTL = 300000
dnsResolutionInterval = 300
negotiateNewProtocol = true
socksResolveDNS = false
useClientSSLCompression = true
negotiateProtocolLevel = 0
channelReapInterval = 60000
tcpSendBufSz = 5120000
useACK = false

[tcpout:splunkcloud]
useClientSSLCompression = true
maxQueueSize = 250MB
autoLBFrequency = 300
My raw data looks like below:

_raw = {"id":"0","severity":"Information","message":"CPW Total= 844961,SEQ Total =244881, EAS Total=1248892, VRS Total=238, CPW Remaining=74572, SEQ Remaining=22, EAS Remaining =62751, VRS Remaining =0, InvetoryDate =4/15/2024 6:16:07 AM"}

I want to extract fields from message so the result looks like the table below. I tried with regex but I am unable to extract. Please help me create the extraction.

CPW Total | SEQ Total | EAS Total | VRS Total | CPW Remaining | SEQ Remaining | EAS Remaining | VRS Remaining | InvetoryDate
844961    | 244881    | 1248892   | 238       | 74572         | 22            | 62751         | 0             | 4/15/2024 6:16:07 AM
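A sketch of one rex that matches the sample message above (capture names use underscores, since field names with spaces are awkward in SPL; InvetoryDate is matched as spelled in the data). If message is not already extracted from the JSON, pull it out with spath first:

| spath path=message
| rex field=message "CPW Total\s*=\s*(?<CPW_Total>\d+)\s*,\s*SEQ Total\s*=\s*(?<SEQ_Total>\d+)\s*,\s*EAS Total\s*=\s*(?<EAS_Total>\d+)\s*,\s*VRS Total\s*=\s*(?<VRS_Total>\d+)\s*,\s*CPW Remaining\s*=\s*(?<CPW_Remaining>\d+)\s*,\s*SEQ Remaining\s*=\s*(?<SEQ_Remaining>\d+)\s*,\s*EAS Remaining\s*=\s*(?<EAS_Remaining>\d+)\s*,\s*VRS Remaining\s*=\s*(?<VRS_Remaining>\d+)\s*,\s*InvetoryDate\s*=\s*(?<InventoryDate>.+)$"
| table CPW_Total, SEQ_Total, EAS_Total, VRS_Total, CPW_Remaining, SEQ_Remaining, EAS_Remaining, VRS_Remaining, InventoryDate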
I am trying to build some modular documentation as a Splunk app on a site with an indexer and search head cluster. Some of the reasoning behind this is that I spend quite some time researching existing configuration when I'm about to make new changes. Thus I would like to be able to create views showing me details from props, transforms, and indexes on the search heads. My question is: do you see any potential pitfalls in having the configuration on the search heads as well as on the indexers? Or is there another solution for viewing configuration on the indexer peers from the search heads? Cheers!
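One alternative worth sketching: instead of copying configuration onto the search heads, query it remotely over the REST API, which a search head can do against its search peers. A hedged example (requires permission to read config endpoints; the listed attributes are just illustrative):

| rest /services/configs/conf-props splunk_server=*
| table splunk_server, title, LINE_BREAKER, SHOULD_LINEMERGE, TRUNCATE

The same pattern works for conf-transforms and conf-indexes. This sidesteps the main pitfall of duplicating props/transforms onto search heads, where the search-time settings would actually take effect and could change search behavior.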
@all When I'm trying to install and configure the OTel Collector to send data from agent mode to a gateway collector in Splunk Observability Cloud, I'm facing many challenges and am not able to connect the agent to the gateway. Can anyone guide me on how to solve this issue?
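For comparison, a minimal sketch of the agent-side config fragment that forwards to a gateway, assuming the gateway listens on the default OTLP gRPC port 4317 and gateway.example.com is a placeholder hostname (this is a fragment to merge into an existing agent config that already defines receivers and processors):

exporters:
  otlp:
    endpoint: "gateway.example.com:4317"
    tls:
      insecure: true  # assumption: plain in-network transport; enable TLS where required

service:
  pipelines:
    traces:
      exporters: [otlp]
    metrics:
      exporters: [otlp]

If the agent cannot connect, checking reachability of port 4317 from the agent host and reading the collector logs on both sides is the usual first step.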
I'm sure someone here has worked on a PowerShell script to install Splunk on different Windows hosts remotely. Can I get help with that? My PowerShell skills are really weak.
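A rough sketch of one approach, assuming PowerShell remoting (WinRM) is enabled on the targets and that the host names, local MSI path, and install flags below are placeholders to adapt. Copy-Item -ToSession avoids the second-hop credential problem of copying from a network share inside the remote session:

# Placeholder list of target hosts
$targets = @("winhost01", "winhost02")

foreach ($t in $targets) {
    $session = New-PSSession -ComputerName $t
    # Push the installer from the admin workstation to the target
    Copy-Item -Path "C:\installers\splunkforwarder.msi" -Destination "C:\Temp\splunkforwarder.msi" -ToSession $session
    # Run a silent MSI install on the target (AGREETOLICENSE is a Splunk MSI property)
    Invoke-Command -Session $session -ScriptBlock {
        Start-Process msiexec.exe -ArgumentList '/i C:\Temp\splunkforwarder.msi AGREETOLICENSE=Yes /quiet' -Wait
    }
    Remove-PSSession $session
}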
Hello,
1. Is there an option (built in or manually built) for a container to view the history of older containers with the same artifacts and details? It would make an analyst's work easier to see the notes and how the older case was solved.
2. When enabling "logging" for a playbook, where on disk are the logs stored so they can be accessed later (besides viewing the debugging output in the UI)?
Thank you in advance!
Hi, I finished upgrading Splunk ES to 7.3.0 on 1 of 2 non-clustered search heads, and I receive this error in the Search Head Post Install Configuration wizard menu: "Error in 'essinstall' command: Automatic SSL enablement is not permitted on the deployer". Splunk support recommended changing a web.conf setting to "splunkdConnectionTimeout = 3000", which I added to the system file before restarting splunkd. Unfortunately this timeout setting does not fix the "known issue". I selected the Enable SSL option in the post-config process because I know that SSL is enabled in the web configs of both the deployer and the SH. If anyone has a workaround for this, or can suggest how I can enable SSL after the post configuration of Splunk ES on both the SH and the deployer, it would be appreciated. Thanks
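One avenue to test, hedged because essinstall options vary by ES version: the CLI form of the post-install step accepts an ssl_enablement argument, which would let you skip the automatic SSL step and handle SSL yourself afterwards. A sketch run from the search head:

$SPLUNK_HOME/bin/splunk search '| essinstall --ssl_enablement ignore' -auth admin:<password>

With SSL enablement ignored, web SSL can then be managed directly in web.conf (enableSplunkWebSSL = true) on the SH and deployer.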
index=app-logs sourcetype=app-data source=*app.logs* host=appdatajs01 OR host=appdatajs02 OR host=appdatajs03 OR host=appdatajs04
| stats count by host
| where count < 100
| bin span=1m _time

We have an alert with the above query. The alert is triggered when a host's count is less than 100, but it is not triggered when a host's count is zero. How do I make the alert trigger even when count=0?
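A sketch of a common fix, assuming the four host names are the complete expected set: stats can only count hosts that produced events, so a host with zero events never appears in the results at all. Appending a zero-count row for every expected host guarantees each one is present:

index=app-logs sourcetype=app-data source=*app.logs* host IN (appdatajs01, appdatajs02, appdatajs03, appdatajs04)
| stats count by host
| append
    [| makeresults
     | eval host=split("appdatajs01,appdatajs02,appdatajs03,appdatajs04", ",")
     | mvexpand host
     | eval count=0
     | fields host count]
| stats max(count) as count by host
| where count < 100

The final stats max(count) keeps the real count where events exist and falls back to 0 for silent hosts, so those hosts now satisfy count < 100 and trigger the alert.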
I have two logs below. Log a occurs throughout the environment and is shown for all users; log b is limited to specific users. I only need times for users in log b.

log a: There is a file has been received with the name test2.txt
log b: The file has been found at the second destination C://user/test2.txt

I am trying to write a query that captures the time between log a and log b without doing a subsearch. So far I have:

index=a env=a account=a ("There is a file" OR "The file has been found") | field filename from log b | field filename2 | eval Endtime = _time | ...

Here is where I am lost: I was hoping to use if/match/like/eval to capture the start time where the log b filename can be found in log a. I have this so far:

| eval Starttime = if(match(filename,"There is%".filename2."%"),_time,0)

I am not getting any 1s, just 0s. I am pretty sure the problem is "There is%".filename2."%"; how do I correct it?
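A sketch of a subsearch-free pattern, assuming the two sample lines above are representative (the rex patterns are guesses against them). match() takes a regex, not SQL-style % wildcards, which is one reason the if() never fires; instead, extract the filename from each message shape and pair the events with stats:

index=a env=a account=a ("There is a file" OR "The file has been found")
| rex "with the name (?<fname_a>\S+)"
| rex "second destination .*/(?<fname_b>\S+)"
| eval filename=coalesce(fname_a, fname_b)
| stats min(_time) as Starttime, max(_time) as Endtime, count(fname_b) as found_in_b by filename
| where found_in_b > 0
| eval duration=Endtime-Starttime

The count(fname_b) filter keeps only filenames that actually appeared in log b, matching the "only users in log b" requirement.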
The event.url field stores all the URLs found in the logs. I want to create a new field called url_domain that captures only the domain of the URLs stored in event.url. For now, what I do from the search is write the following:

| rex field=event.url "^(?:https?:\/\/)?(?:www[0-9]*\.)?(?)(?<url_domain>[^\n:\/]+)"

What should I add in props.conf so that this extraction is fixed for the sourcetype "sec-web"?
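A sketch of the props.conf stanza, assuming a search-time extraction on the search head is what's wanted; EXTRACT supports an "in <field>" clause to run the regex against a field other than _raw. The stray (?) from the original regex is dropped here, since it is not a valid group:

[sec-web]
EXTRACT-url_domain = ^(?:https?:\/\/)?(?:www[0-9]*\.)?(?<url_domain>[^\n:\/]+) in event.url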
So, I created a savedsearch and it was working fine. But I had to change the SPL for it, and when I tried it again, it was still showing the old results and not reflecting the new SPL. Why? Do I have to wait for the changes to happen?
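One quick way to check what Splunk actually has stored for the saved search (for example, whether the edit was saved in a different app context or not saved at all); a hedged sketch, with "My Saved Search" as a placeholder name:

| rest /servicesNS/-/-/saved/searches
| search title="My Saved Search"
| table title, eai:acl.app, search

If the search column here still shows the old SPL, the edit never landed in the context you are running it from.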
Hello Fellow Splunkers, I'm fairly new to ITSI and was wondering if this could be achieved. I'm looking to create a report which would allow me to list all the services I have in ITSI, along with their associated entities, as well as the associated alerts or severity. Is there a query that could achieve this? Any pointers are very much appreciated! Also, any pointers on where I could find the data and bring it together in a search would be very helpful too. Thanks!
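A hedged starting point, assuming the shipped ITSI lookups and summary index are available (names and fields vary by ITSI version, so verify with fieldsummary first): service definitions live in the itsi_services KV store lookup, entity definitions in itsi_entities, and KPI results with severities are written to the itsi_summary index. For example:

| inputlookup itsi_services
| table title, description

index=itsi_summary alert_severity=*
| stats latest(alert_severity) as latest_severity by kpi

Joining these back to services and entities usually goes through the service/KPI id fields; running | fieldsummary on each source shows which keys your version exposes.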
How do I take a dashboard global time (i.e., $global_time.earliest$ and $global_time.latest$) and convert it into a date to be used when searching a lookup file that only has a date column (e.g., 04/15/2024)?
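A sketch of one pattern, assuming the panel search itself uses $global_time.earliest$/$global_time.latest$ as its time range and the lookup column is literally named date in %m/%d/%Y format (my_lookup.csv is a placeholder): addinfo exposes the search's time bounds as info_min_time/info_max_time, which can be compared against the parsed date:

| inputlookup my_lookup.csv
| addinfo
| eval date_epoch = strptime(date, "%m/%d/%Y")
| where date_epoch >= info_min_time AND date_epoch < info_max_time
| fields - date_epoch, info_*

Caveat: with an "All time" range, info_max_time can come back as +Infinity, so the upper bound may need special handling.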
We need to easily identify the SQL submitted by DB Connect. We'd like to use Oracle's SET_MODULE procedure. How do we accomplish this in DB Connect?

call DBMS_APPLICATION_INFO.SET_MODULE (
    module_name => 'Splunk_HF',
    action_name => 'DMP_Dashboard'
);
<put our DB Input SQL here>
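Hedged, because it is not certain that a DB Connect input will accept anything beyond a single SELECT statement: if your version does allow a PL/SQL block to run, the Oracle side would look like the anonymous block below (module and action names taken from the question). Whether DB Connect executes it, and whether the tagged session then persists for the following input SQL, needs testing against your setup:

BEGIN
  -- Tag this session so its SQL shows up under Splunk_HF in V$SESSION
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'Splunk_HF',
    action_name => 'DMP_Dashboard'
  );
END;

If that route fails, identifying DB Connect sessions by the PROGRAM column in V$SESSION (which reflects the JDBC client) is a fallback that needs no changes on the Splunk side.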