All Posts

Does anybody have an idea?  Thank You
I recently updated Splunk to the latest version. After the update, our Universal Forwarders and Heavy Forwarders stopped showing up under Forwarder Management. They do appear in the Monitoring Console, which shows data flowing between the servers. I edited deploymentclient.conf to use the FQDN, and also the IP, each followed by port 8089, but nothing has made the forwarders or heavy forwarders show up.
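For reference, a minimal sketch of a deploymentclient.conf pointing a forwarder at a deployment server, assuming a placeholder hostname; substitute your own FQDN or IP:

[target-broker:deploymentServer]
# Management port of the deployment server (8089 by default)
targetUri = deploy-server.example.com:8089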
@nieminej  Confirm that the app is consistently named base_app (not base_uf) across the deployment server, the UF apps directory (C:\Program Files\SplunkUniversalForwarder\etc\apps), and serverclass.conf.

Check the contents of base_app on the deployment server ($SPLUNK_HOME/etc/deployment-apps/base_app) and ensure it matches what's expected on the UF after deployment.

Manually install the UF on a test workstation (bypassing SCCM) and configure it to phone home to the deployment server. Does the issue persist? This isolates whether SCCM is a factor.

I suspect the deployment server is instructing the UF to uninstall base_app due to either a mismatch between the app's expected state (as defined in serverclass.conf) and its actual state on the UF after SCCM deployment, or a misconfiguration in base_app's configs causing the UF to misinterpret its deployment instructions.
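As a rough sketch of what a consistent serverclass.conf might look like for this setup (serverclass and app names are taken from the thread; the whitelist and restart settings are assumptions to adapt):

[serverClass:workstations]
whitelist.0 = *

[serverClass:workstations:app:base_app]
# Deploy the app enabled and restart the UF after installation
stateOnClient = enabled
restartSplunkd = true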
Hi @majlo333,
you should try to use the lookup command (https://docs.splunk.com/Documentation/Splunk/9.4.1/SearchReference/Lookup):

index=myindex
| eval urgency="medium", account_name='awsMetadata.account.name'
| lookup your_lookup.csv project_name AS account_name OUTPUT env
| stats values(env) AS env values(urgency) AS urgency BY account_name

Ciao.
Giuseppe
Hi @Gryphus,
are only those three indexes not fully searchable, or all the indexes?
Ciao.
Giuseppe
We have clustered Deployment Servers (with an NFS shared drive) because we will have tens of thousands of clients in total in the final setup. We have deployed the UF to workstations, and we have a workstation serverclass with a few apps on it, including base_app, which contains deploymentclient.conf, outputs.conf, server.conf, and certificates. When the UF agent is installed on workstations through SCCM, it phones home and then just reports: Serverclass=workstations is uninstalling app=C:\Program Files\SplunkUniversalForwarder\etc\apps\base_uf. We have tried crossServerChecksum with both true and false, with no change. We can't figure it out from any logs; there are no errors, it just says that it started to uninstall the app, then restarts the UF and loses the connection. If we check one unique client, it belongs to only one serverclass, and the Workstations serverclass includes our base_app plus the Splunk_TA_windows and sysmon apps. We have version 9.4.1 on our Enterprise servers and the UFs are on 9.3.2; phone homes come through an F5 load balancer. We are running out of ideas with this.
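One way to dig further is to search the UF's own splunkd.log for the deployment client's decisions around that app. A hedged example on a Windows workstation (log component names such as DeployedApplication and DeploymentClient are typical, but verify against your own log):

findstr /i "DeployedApplication DeploymentClient base_uf" "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log"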
Hi, I have a query that goes something like this:

index=myindex
| eval urgency="medium", account_name='awsMetadata.account.name'
| stats count values(account_name) as account_name, values(urgency) as urgency

I also have a CSV file with the following columns and values:

env,project_name
prod,prod_account11
dev,dev_account3
prod,prod_account55
qa,qa_account43

I wish to compare each of the results of the query above, using the "account_name" field, with the CSV file field "project_name", and if those two values match for a result, I wish to create a new field "env" in my results based on the "env" field from the CSV file. E.g., if the query result "prod_account55" from the account_name field is found in the CSV file as "prod_account55" in the project_name field, extract the "prod" value from the env field as a new field in the results.
Assuming individual scopeSpans are unique (which is likely, since they contain timestamps and ids), try something like this:

| spath resourceSpans{}.scopeSpans{}.spans{} output=scopeSpans
| stats count by scopeSpans
| spath input=scopeSpans
Hi, thanks for your interest. I have put a larger answer below my original one; I hope this gives you what you were looking for. Rob
Hi, thanks for getting back to me. I put more details below my original answer. There should be a 1-to-1 mapping, as a parent can be empty, meaning that that SPAN has no parent. (So I might need to put in logic there to set it to NA, to make sure the data lines up correctly!) I am not sure how to "create composite fields"? Thanks in advance.
Hello Co-Splunkers, greetings. I have a point to fix in a Splunk visualisation (cluster map). I am plotting values based on LAT and LON values, and the result is good, but is there any scope to plot a continuous line instead of bubble marks? I need to turn those bubble points into a connecting line, like in Google Maps where it connects from the start point to the end point. Please take a look at the attached snaps; the first image is my data, and the second shows what I need to achieve. Thanks in advance for your responses and for the time spent on my question.
thanks for the draw.io icons
Interactive in the sense that you can click on each element and drill down into further details. The main reason for diagram-as-code is that we can maintain it in version control.
Hi, thank you all for your future help on this. Below is one example of an event I might have (attached is the RAW data for 4 lines). We have multiple Spans in the data. Inside those, we have various attributes. I want to be able to put in one traceId = XYZ and get the start and end, name, etc. of all the Spans that I have. So I was going to get it in a 1-to-1 table format and then, when I have that data, make tables, graphs, etc. I can say that when traceId = XYZ, I will have access to the other data. But as you can see, I get an error if the data is too big. This is the props.conf I am using:

[Market_Risk_DT]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 1000000
NO_BINARY_CHECK = true
TIME_FORMAT = %s%3N
TIME_PREFIX = \"startTimeUnixNano\":\"
category = Custom
description = Market_Risk_DT
disabled = false
pulldown_type = true
I am using Splunk Observability Cloud for Kubernetes monitoring and trying to retrieve data for container CPU limits using the k8s.container.cpu_limit metric, but I'm not getting any data.

data('k8s.container.cpu_limit', rollup='average').sum(by=['k8s.container.name', 'k8s.pod.name', 'k8s.pod.uid', 'k8s.node.name', 'k8s.cluster.name'])

Thanks in advance!
Would these settings also have to be made if I set the retention period for this index to 1 day or possibly 1 week?
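For context, index retention itself is governed by frozenTimePeriodInSecs in indexes.conf; a minimal sketch, assuming a placeholder index name:

[your_index]
# 86400 = 1 day; 604800 = 1 week
frozenTimePeriodInSecs = 86400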
Wait a second. You have two separate issues here. One is your search itself; whether it can be written better (yes, it can) is one question. The other question is how the search is run: how often it is spawned (in your case every minute; isn't that a bit much?), over what time range it runs and thus how many results it returns, and also what your data ingestion characteristics are, i.e. how often you get new events. For example, if you're searching every minute over the last 15 minutes' worth of data, you will hit the same result about 15 times (the actual searches might of course get delayed or skipped depending on your search load and the schedule type), so unless you use throttling, you'll get 15 separate alerts.
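As a hedged sketch of how throttling pairs with a schedule in savedsearches.conf (the stanza name and values are placeholders, not your actual alert):

[my_alert]
cron_schedule = * * * * *
dispatch.earliest_time = -15m
dispatch.latest_time = now
# Suppress repeat firings for 15 minutes so overlapping windows alert only once
alert.suppress = 1
alert.suppress.period = 15m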
Yes, set up tokens in the searches which make the search invalid when they are not needed, e.g. by making them contain illegal SPL syntax.
Hi @livehybrid, I tried version 8.4 and hit the same issue. Luiz
@MrLR_02, the 1-hour frozenTimePeriodInSecs will not affect buckets which are "hot", i.e. actively open and being written to. If your buckets aren't rolling from hot → warm → cold within an hour, retention will appear longer. The reason a restart causes them to roll to frozen is that the indexer closes the hot bucket when it restarts; the bucket thus becomes warm and can then be frozen out.

To enforce deletion 1 hour after ingestion, you may need to review some of the following settings; I've included some examples below. Force hot buckets to roll faster by setting the options shown. It's worth understanding these and configuring them as required; check https://docs.splunk.com/Documentation/Splunk/latest/Admin/Indexesconf for more info.

[your_index]
maxHotSpanSecs = 3600 # Hot bucket rolls to warm after 1h
maxHotIdleSecs = 60 # Rolls if idle for 1min
maxDataSize = auto_high_volume # Or lower to cap hot-bucket size

These ensure hot buckets roll to warm based on time, not just size.

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.
Regards,
Will