All Posts


Can you share some events that are not included in the count?
Hi @Real_captain , yes: in the first dropdown insert the three lookup names, and use its token in the second dropdown, something like this:

<input type="dropdown" token="first" searchWhenChanged="true">
  <search>
    <query>
      | makeresults
      | eval lookup="lookup1.csv"
      | fields lookup
      | append [ | makeresults | eval lookup="lookup2.csv" | fields lookup ]
      | append [ | makeresults | eval lookup="lookup3.csv" | fields lookup ]
    </query>
    <earliest>$Time.earliest$</earliest>
    <latest>$Time.latest$</latest>
  </search>
  <fieldForLabel>lookup</fieldForLabel>
  <fieldForValue>lookup</fieldForValue>
</input>
<input type="dropdown" token="lookup" searchWhenChanged="true">
  <search>
    <query> | inputlookup $first$ | fields fieldA </query>
    <earliest>$Time.earliest$</earliest>
    <latest>$Time.latest$</latest>
  </search>
  <fieldForLabel>fieldA</fieldForLabel>
  <fieldForValue>fieldA</fieldForValue>
</input>

This way, the first dropdown selects the lookup and the second dropdown selects a value from that lookup. If the lookups have different field names, you have to normalize them to the same names (using rename). Ciao. Giuseppe
Hi @alexeysharkov , I suppose that _time corresponds to the <local_time>. Please try one more simple thing: rename log.bankCode to log_bankCode before timecharting, and then use this field in the timechart. Could you also share your events, including the _time field? Ciao. Giuseppe
Hi @Karthikeya , did you also create fields.conf on the indexers, as described in the link above? Ciao. Giuseppe
Sweet, it's storing the color in $color|s$ and displaying it in the "Free Space (%)" column cells now, which is what I need. But it only evaluates the first color value in the column, not each individual field value, so the whole column is a single color. Is there a way to make it evaluate for each field value?
Hi Team, can you please let me know if it is possible to display different CSV files based on the drilldown value selected in the parent table?

Example: I have a search panel with the below drilldown that sets the value of the Application clicked in the parent dashboard:

<drilldown>
  <condition match="isnotnull($raw_hide$)">
    <unset token="raw_hide"></unset>
    <unset token="raw_APPLICATION"></unset>
  </condition>
  <condition>
    <set token="raw_hide">true</set>
    <set token="raw_APPLICATION">$row.APPLICATION$</set>
  </condition>
</drilldown>

Based on the value of the APPLICATION clicked in the parent dashboard, I want to display the corresponding CSV:
If Application = "X", I want to use the command: | inputlookup append=t X.csv
If Application = "Y", I want to use the command: | inputlookup append=t Y.csv
If Application = "Z", I want to use the command: | inputlookup append=t Z.csv

OR: is it possible to display 3 different panels based on the APPLICATION selected in the parent dashboard? That is, based on the value of the token set in the <drilldown> of the parent dashboard, can we display a different panel using <panel depends="$tokenset$">?
Panel 1 using X.csv: <panel depends="$tokensetX$">
Panel 2 using Y.csv: <panel depends="$tokensetY$">
Panel 3 using Z.csv: <panel depends="$tokensetZ$">
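One possible simplification (just a sketch, assuming the lookup files are named exactly after the APPLICATION values, i.e. X.csv, Y.csv, Z.csv exist): since $raw_APPLICATION$ already holds the clicked value, the token could be used directly in the lookup file name, avoiding both the per-application conditions and the three separate panels:

```xml
<panel depends="$raw_hide$">
  <table>
    <search>
      <!-- $raw_APPLICATION$ is set by the parent drilldown; a file named
           <APPLICATION>.csv is assumed to exist for every application value -->
      <query>| inputlookup append=t $raw_APPLICATION$.csv</query>
    </search>
  </table>
</panel>
```

The depends="$raw_hide$" attribute keeps the panel hidden until the drilldown sets the token, so a single panel can serve all applications.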
Hi everyone, does anybody know if there is a possibility to set the dropdown width of an input in Dashboard Studio? This wasn't a big deal with Simple XML and CSS in the classic dashboards. Thanks to all in advance.
Changing the search to remove table made no difference. The raw data is in the first message; it is just simple XML source:

<log><local_time>2025-02-25T17:02:59:979253+05:00</local_time><bik>TSESKZKA</bik><fileName>stmt_4102880506.pdf</fileName><size>238529</size><iin>780515303362</iin><agrementNumber>4102880506</agrementNumber><agrementDate>08.09.2021</agrementDate><referenceId>HKBRZA0000388353</referenceId><bankCode>Jysan bank</bankCode><result>OK</result></log>

<log><local_time>2025-02-25T17:02:59:986891+05:00</local_time><bik>INLMKZKA</bik><fileName>stmt_dbz.pdf</fileName><size>195992</size><iin>710416303014</iin><agrementNumber>4400863944</agrementNumber><agrementDate>17.02.2024</agrementDate><referenceId>HKBRZA0000388352</referenceId><bankCode>Halyk bank</bankCode><result>OK</result></log>
@kiran_panchavat I'm doing the same now: I created an app under etc/deployment-apps with props and transforms in it, pushed it to the CM, and then pushed it to the indexers through the CM. The remaining configs are working fine; I'm only facing this one issue.
@Karthikeya  Your configurations need to be applied either by a Heavy Forwarder before indexing, or by the indexers during indexing. Since you don't have a HF, focus on correctly configuring index-time extraction on your indexers. Simply having the files on the DS isn't enough; they must be deployed to the indexers, and the indexers must be configured to use them. You can use the Cluster Master to push the configurations to the indexers.
"Deploy the fields.conf changes to the search head." Do I need to do this as well? If yes, how do I configure it?
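For reference, a minimal sketch of what that search-head change could look like, assuming the indexed field is named idname as in the transforms.conf from this thread. fields.conf tells the search head that the field was extracted at index time, so searches such as idname=SATURN-CK-GSPE can use the index rather than scanning raw events:

```
# fields.conf on the search head (sketch; the stanza name is the indexed field name)
[idname]
INDEXED = true
```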
@kiran_panchavat I am not sure; I have it configured the same way, but no luck...
@gcusello  there are no heavy forwarders. We have a syslog server with a UF installed on it that forwards the data to our deployment server. I have written props and transforms on the DS and then pushed them to the CM and on to the indexers. Where am I making a mistake?
Hi @alexeysharkov , don't use the table command before timechart and please share some raw data. Ciao. Giuseppe
@Karthikeya  Additionally, I have tested this in my lab, and it's working fine. Please take a look.

Sample events:
rsaid="/alpha-A-01/ALPHA-CK-GSPE/v-alpha.linux.com-101"
rsaid="/beta-B-02/BETA-CK-GSPE/v-beta.linux.com-102"
rsaid="/gamma-G-03/GAMMA-CK-GSPE/v-gamma.linux.com-103"
rsaid="/delta-D-04/DELTA-CK-GSPE/v-delta.linux.com-104"
rsaid="/epsilon-E-05/EPSILON-CK-GSPE/v-epsilon.linux.com-105"
rsaid="/zeta-Z-06/ZETA-CK-GSPE/v-zeta.linux.com-106"
rsaid="/eta-H-07/ETA-CK-GSPE/v-eta.linux.com-107"
rsaid="/theta-T-08/THETA-CK-GSPE/v-theta.linux.com-108"
rsaid="/iota-I-09/IOTA-CK-GSPE/v-iota.linux.com-109"
rsaid="/kappa-K-10/KAPPA-CK-GSPE/v-kappa.linux.com-110"
rsaid="/lambda-L-11/LAMBDA-CK-GSPE/v-lambda.linux.com-111"
rsaid="/mu-M-12/MU-CK-GSPE/v-mu.linux.com-112"
rsaid="/nu-N-13/NU-CK-GSPE/v-nu.linux.com-113"
rsaid="/xi-X-14/XI-CK-GSPE/v-xi.linux.com-114"
rsaid="/omicron-O-15/OMICRON-CK-GSPE/v-omicron.linux.com-115"
rsaid="/pi-P-16/PI-CK-GSPE/v-pi.linux.com-116"
rsaid="/rho-R-17/RHO-CK-GSPE/v-rho.linux.com-117"
rsaid="/sigma-S-18/SIGMA-CK-GSPE/v-sigma.linux.com-118"
rsaid="/tau-T-19/TAU-CK-GSPE/v-tau.linux.com-119"
rsaid="/upsilon-U-20/UPSILON-CK-GSPE/v-upsilon.linux.com-120"
rsaid="/phi-F-21/PHI-CK-GSPE/v-phi.linux.com-121"
rsaid="/chi-C-22/CHI-CK-GSPE/v-chi.linux.com-122"
rsaid="/psi-PS-23/PSI-CK-GSPE/v-psi.linux.com-123"
rsaid="/omega-W-24/OMEGA-CK-GSPE/v-omega.linux.com-124"
Hi @Karthikeya , two questions:
Did you follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.4.0/Data/Configureindex-timefieldextraction ?
You located the conf files on the Indexers (using the Cluster Manager), but are there any intermediate Heavy Forwarders between the data sources and the Indexers?
In the first case, follow the instructions. In the second case, put the conf files on the first Heavy Forwarders the data passes through. Ciao. Giuseppe
I am trying to extract a field at index time. I have put the following on my cluster master and pushed it to the indexers, but the field is not getting extracted.

transforms.conf:
[idname_extract]
SOURCE_KEY = _raw
REGEX = rsaid="\/[^\/]+\/([^\/]+)\/
FORMAT = idname::$1
WRITE_META = true

props.conf:
TRANSFORMS-1_extract_idname = idname_extract

The field is not extracted once indexing is done. But when I run the extraction at search time it works, which means my regex is correct but it is failing at index time:

|rex "rsaid=\"\/[^\/]+\/(?<idname>[^\/]+)\/"

Raw field:
rsaid="/saturn-X-01/SATURN-CK-GSPE/v-saturn.linux.com-44"

I need to extract idname=SATURN-CK-GSPE at index time. Am I missing something?
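As a quick sanity check outside Splunk (a sketch; Python's re syntax is close enough to the PCRE Splunk uses for this particular pattern), the REGEX from transforms.conf does capture the expected value from the sample event, which supports the conclusion that the problem is in config deployment rather than in the regex itself:

```python
import re

# The index-time REGEX from transforms.conf, applied to the sample raw event
raw = 'rsaid="/saturn-X-01/SATURN-CK-GSPE/v-saturn.linux.com-44"'
pattern = r'rsaid="\/[^\/]+\/([^\/]+)\/'

match = re.search(pattern, raw)
# Group 1 is the value that FORMAT = idname::$1 would write into the index
print(match.group(1))  # SATURN-CK-GSPE
```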
Hi Giuseppe, yep, now span divides the timeline correctly, but the count is incorrect. I only see a count at the hour mark.
I think I might have found something. I tested this by setting phoneHomeIntervalInSecs = 3600 so that the client only pulls updates every hour, and I found this REST call in the REST API Reference Manual. I tried it, but /reload does not seem to force the client to phone home. However, I found another option in the response that I am able to use:

<link href="/services/deployment/client/config" rel="list"/>
<link href="/services/deployment/client/config" rel="edit"/>
<link href="/services/deployment/client/config/reload" rel="reload"/>
<link href="/services/deployment/client/config/sync" rel="sync"/>
<content type="text/xml">

It is /sync. So I tried:

curl -k -u username:pass -X POST https://<IP>:8089/services/deployment/client/deployment-client/sync

and it makes the client phone home when I hit this URL. It also pulled the app updates I had made. This worked for me.
@uagraw01  This "Bad Allocation" error often indicates that the server is running out of memory while processing the request. It can occur during large searches or when the server's memory resources are insufficient. The "HTTP Status 400 (Bad Request)" error typically means that the request sent to the server was malformed or incorrect in some way; you might want to check the request syntax and ensure all required parameters are correctly formatted. Check the resources below:
https://community.splunk.com/t5/Reporting/Searches-fail-with-quot-bad-allocation-quot-error/m-p/197630
https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/Noop
https://docs.splunk.com/Documentation/Splunk/9.4.0/Search/Comments
I would also recommend raising a support ticket.