All Posts

The universal forwarder does not parse data except in certain limited situations. Can anyone tell me what these situations are?
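For context, the best-documented situation is structured data: when props.conf on the forwarder itself sets INDEXED_EXTRACTIONS, the universal forwarder parses that file locally before sending it on. A minimal sketch, assuming a CSV input (the sourcetype name is hypothetical):

props.conf (on the universal forwarder):

[my_csv_logs]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1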
After tooling with it more, I think the best approach uses the map command:

| makeresults count=2
| streamstats count
| eval index = case(count=1, "myindex1", count=2, "myindex2")
| outputlookup lookup_of_events
| stats count by index
| map report_to_map_through_indexes

where the saved search report_to_map_through_indexes is:

| inputlookup lookup_of_events where index="$index$"
| collect index="$index$"
Seems that the icon functionality doesn't care about files in the $APP/appserver/static path; it only pulls file data from the KV store, forcing you to somehow transfer this KV store collection to your SHCluster every time you deploy something. Not cool. Easier to just convert all icons to images by hand, once. We used icons because we thought images don't have the "hideWhenNoData" option. Turns out they do have it, but the docs are not too clear. https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/DashStudio/chartsImage https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/DashStudio/showHide
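For anyone else hunting for it, the flag in the Dashboard Studio source looks roughly like this — a sketch assuming the flag sits at the visualization level (the IDs and src are hypothetical; confirm the exact placement against the showHide doc above):

"viz_status_image": {
  "type": "splunk.image",
  "options": {
    "src": "https://example.com/status.png"
  },
  "dataSources": {
    "primary": "ds_status"
  },
  "hideWhenNoData": true
}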
Could you help me? What would this search look like?
Hi Splunkers, I'm deploying a new Splunk Enterprise environment; inside it, I have (for now) 2 HFs and a DS. I'm trying to set an outputs.conf file on both HFs via the DS; the clients phone home to the DS correctly, but then the apps are not downloaded. I checked the internal logs and got no errors related to the app. I followed the docs and the course material used during the Architect course for reference. Below is the configuration I made on the DS.

App name: /opt/splunk/etc/deployment-apps/hf_seu_outputs/

App files:

/opt/splunk/etc/deployment-apps/hf_seu_outputs/default/app.conf

[ui]
is_visible = 0

[package]
id = hf_outputs
check_for_updates = 0

/opt/splunk/etc/deployment-apps/hf_seu_outputs/local/outputs.conf

[indexAndForward]
index = false

[tcpout]
defaultGroup = default-autolb-group
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:default-autolb-group]
server = <idx1_ip_address>:9997, <idx2_ip_address>:9997, <idx3_ip_address>:9997

serverclass.conf:

[serverClass:spoke_hf:app:hf_seu_outputs]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

[serverClass:spoke_hf]
whitelist.0 = <HF1_ip_address>, <HF2_ip_address>

File and folder permissions are right; the owner is the user used to execute Splunk (in a nutshell, the owner of /opt/splunk). I suppose it is a very stupid issue, but I'm not able to figure it out.
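One detail worth checking against the sketch below: serverclass.conf takes one whitelist entry per index rather than a comma-separated list, so the server class might need to look like this (IP placeholders kept from the post):

[serverClass:spoke_hf]
whitelist.0 = <HF1_ip_address>
whitelist.1 = <HF2_ip_address>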
Hi @fde, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma points are appreciated.
Hi @IlianYotov, do the new files have the same name as the previous ones or a different one? Did you check without the "crcSalt = <SOURCE>" option? Is it possible that the new files have the same content as the previous ones? Ciao. Giuseppe
Would have been nice for Splunk support to mention this. I've had to move on and decommission Server 2022. Installed 2019 like you suggested and everything is working as it should. Thanks again.
Hi @Real_captain, please try something like this:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| transaction startswith="IDJO20P" endswith="PIDZJEA"
| bin span=1d _time
| chart sum(eventcount) AS eventcount OVER _time BY NIDF

Ciao. Giuseppe
Hi, my problem is solved. With support guidance, I changed the IP address used for replication. Here are the steps:

1) Stop the indexer.
2) Change etc/system/local/server.conf and add the register_replication_address parameter with the new IP.
3) Rename etc/instance.cfg to etc/instance.cfg.bkp.
4) Restart the indexer.

CM = Index clustering: after a few seconds => OK.
MC reconfigured the indexer => OK.
CM and SH = Distributed Search => 2 entries for the same instance name with 2 different Peer URIs; both entries appeared Down and Sick.

5) I deleted one entry, the one with the old Peer URI => after 5 minutes and a refresh, no more entries Down and Sick.

And now, all seems to be OK. The peer URI is now showing the new IP address.
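For reference, the change in step 2 looks roughly like this — a minimal sketch, assuming the parameter lives in the [clustering] stanza of the peer's server.conf (the IP is hypothetical):

etc/system/local/server.conf:

[clustering]
register_replication_address = 10.10.20.30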
Hi @avi7326, as I said, it doesn't make sense to put the result of a stats search and a table in the same panel. Use your searches in two different panels. Ciao. Giuseppe
Hi All, below is my query to get the sum of field values for the latest correlationId. I need to show it in a pie chart, but I am getting values as "other". PFA screenshot.

index="mulesoft" *Upcoming Executions* content.scheduleDetails.lastRunTime="*" [search index="mulesoft" *Upcoming Executions* environment=DEV | stats latest(correlationId) as correlationId | table correlationId | format]
| rename content.scheduleDetails.lastRunTime as LastRunTimeCount
| stats count(eval(LastRunTimeCount!="NA")) as LastRunTime_Count count(eval(LastRunTimeCount=="NA")) as NA_Count by correlationId
| stats sum(LastRunTime_Count) as LastRunTime_Count, sum(NA_Count) as NA_Count
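In case it helps: a pie chart expects one label column and one value column per row, while the final stats above returns a single row with two columns. A sketch of one way to reshape it, appended to the query above (my assumption is that this shape is what the pie chart needs):

| transpose
| rename column AS metric, "row 1" AS count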
This is what I found, and it worked! First of all, the message "Can not communicate with task server......" is vague and does not give a clear idea, so it can have many causes; a few of them are:

1. Updating the JAVA_HOME path
2. Checking the JRE version
3. Checking if the HF has an approved license or is connected to the License Manager (it's no longer the License Master)
4. Changing the task server port to 9995 or 1025, instead of 9998

What I did was this: SPLUNK_HOME/var/log/splunk/splunkd.log showed some errors for dbx-migration.conf, so I added these lines by creating dbx-migration.conf in /etc/apps/splunk_app_db_connect/local:

[encryption]
disabled = 0
upgrade = DONE

Then a restart of splunkd. Works super smooth.
Hello, I need some help. I have a folder and an app that writes logs in NDJSON format and creates a new log file every 15 minutes. The configuration that I use is this:

[monitor:///Users/yotov/app/.logs/.../*.log]
disabled = false
sourcetype = ndjson
crcSalt = <SOURCE>
alwaysOpenFile = 1

The problem is that the Splunk Forwarder doesn't detect newly added files. It reads only the files present at start and detects newly added content in them, but when a new file is added, it is ignored until the Splunk Forwarder is restarted. I'm using the latest version of the Splunk Forwarder and have tried under Linux and macOS. What am I missing?
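A minimal variant that might help isolate the behaviour, assuming each 15-minute file gets a unique name, in which case crcSalt and alwaysOpenFile shouldn't be needed (this is a diagnostic sketch, not a confirmed fix):

[monitor:///Users/yotov/app/.logs/.../*.log]
disabled = false
sourcetype = ndjson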
I have the same need right now. I am wondering, did you manage to resolve this after all this time? I am trying round(x,2), but it looks like the Splunk visualization does its own rounding, defaulting to round(x,3). I appreciate any help. Thank you.
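One workaround sketch, assuming a numeric field called value (hypothetical name): render it as a string with printf so the visualization can't re-round it and trailing zeros survive. Note this suits tables; charts need a numeric field, so there you'd look at the visualization's format options instead.

| eval value = printf("%.2f", value)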
Hi @avi7326, please try the query below:

(index=whcrm OR index=whcrm_int) sourcetype="bmw-sl-gcdm-int-api" ("Sending POST consents to *" OR "Create / Update Consents done" OR "Error in sync-consent-dataFlow:*")
| eval Total = if(match(_raw, "Sending POST consents to *"), 1, 0)
| eval Success = if(match(_raw, "Create / Update Consents done"), 1, 0)
| eval Error = if(match(_raw, "Error in sync-consent-dataFlow:*"), 1, 0)
| rex field=message ": (?<json>\{[\w\W]*\})$"
| rename properties.correlationId as correlationId
| rename properties.gcid as GCID
| rename properties.gcid as errorcode
| rename properties.entity as entity
| rename properties.country as country
| rename properties.targetSystem as target_system
| stats sum(Total) as Total sum(Success) as Success sum(Error) as Error by correlationId GCID errorcode entity country target_system
| eval ErrorRate = round((Error / Total) * 100, 2)
Hi @ITWhisperer, shouldn't you filter on _time as well? Otherwise you will probably get inaccurate results.
Hi, thanks for the response. But it gives me the result like below: [screenshot] I want to have the results as below: [screenshot]
Hello @jstoner_splunk @fwijnholds_splu, I'm getting different results when filtering in Incident Review; is that normal? Thanks.
My understanding is that your UiPath team does not know how to export the UiPath logs. You will need to first determine the methods and options available to you in terms of log types; they could be JSON, CSV, XML, etc. So it's best to have a look at the UiPath documentation that the vendor provides. It's also best to design the data flow and establish what data you want to collect, what format it is in, and how to get it into Splunk. This looks like a SaaS service, so you may be able to send the data directly to Splunk's HEC endpoint rather than using a file input. Otherwise, if UiPath is logging to C:\uipath_logs, you will need to work out how to export those logs to files in that folder and use a monitor input; another option is to send the data to Splunk HEC via API, which means you create a Splunk HEC service with a token and send the data there. There is some documentation for this, but you will need your team to ensure it's correct; if not, contact the vendor and get some information on what options are available to you for UiPath export. https://docs.uipath.com/insights/automation-suite/2023.4/user-guide/overview-real-time-data-export
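If the file route is chosen, a minimal monitor stanza might look like this — a sketch where the folder comes from the post above, while the file pattern, sourcetype, and index names are my assumptions:

inputs.conf (on the forwarder reading C:\uipath_logs):

[monitor://C:\uipath_logs\*.json]
disabled = false
sourcetype = uipath:json
index = uipath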