
All Posts

Hello community, we installed the CIM app on our SH cluster. The installation seemed to work as expected: all defaults, no modifications. Afterwards, when trying to launch the app, we landed on a "ta_nix_configuration" page, and trying to access the "cim_setup" page we got a 404. We removed the app, rolled all members, and re-installed. Once again it all seemed to work just fine. This time around we can access the cim_setup page, though if we try to access "<baseurl>/app/Splunk_SA_CIM/" directly or use the "launch app" link in the GUI, we land on the "<baseurl>/app/Splunk_SA_CIM/ta_nix_configuration" page. Is this somehow the expected behaviour, or have we got some crossed wires somewhere?
I am afraid it's on version 8.2.11.2. That answers it. I assume there is nothing else we can try on the version I am on? Best regards
I have two counts in the dashboard: one is the total count and the other is the error count. To get the success count I want the difference between them. How can we do that?

index=US_WHCRM_int (sourcetype="bmw-crm-wh-xl-cms-int-api" severity=INFO ("*Element*: bmw-cm-wh-xl-cms-contractWithCustomers-flow/processors/2/processors/0 @ bmw-crm-wh-xl-cms-int-api:bmw-crm-wh-xl-cms-api-impl/bmw-cm-wh-xl-cms-contractWithCustomers*") OR "*flow started put*contractWithCustomers" OR "*flow started put*customers:application*" OR "ERROR Message" OR "flow started put*contracts:application*") OR (sourcetype="bmw-crm-wh-xl-cms-int-api" severity=ERROR "Error Message")
| rex field=message "(?<json_ext>\{[\w\W]*\})"
| rex field=message "put:\\\\(?<Entity>[^:]+)"
| rename attributes{}.value.details as details
| rename properties.correlationId as correlationId
| table _time properties.* message json_ext details Entity
| spath input=json_ext
| stats count by Entity

Using | stats count by Entity and | stats count by title I am getting two counts. How can I find the difference between the Entity count and the title count?
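Not an answer from the thread, but one common pattern for this is to compute both counts in a single stats call and take the difference with eval, rather than running two separate stats. A minimal sketch, assuming the error events can be identified by severity=ERROR (the field names come from the post above):

| stats count as total, count(eval(severity="ERROR")) as errors
| eval success = total - errors

The same idea works per group by adding a by clause (e.g. by Entity), so each row carries its own total, error, and success counts.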
https://docs.splunk.com/Documentation/Splunk/9.1.1/Indexer/Usemaintenancemode That's one thing to read; probably others will add their three cents on this. Furthermore, remember to raise the retention periods for your indexes. Otherwise, when you power the cluster back on some time later, it might happily roll buckets to frozen because they will already be old enough.
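A minimal sketch of both steps, assuming a standard indexer cluster (the maintenance-mode commands are run on the cluster manager; the index name old_data and the retention value are placeholders, not values from the thread):

# On the cluster manager, before stopping the indexer peers:
splunk enable maintenance-mode
# ... stop the peers and hibernate the cluster ...
# After bringing everything back up:
splunk disable maintenance-mode

# indexes.conf on the indexers: raise retention so buckets don't
# roll to frozen while the cluster sits hibernated
[old_data]
frozenTimePeriodInSecs = 315360000

frozenTimePeriodInSecs is in seconds; 315360000 is roughly ten years.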
The valid/working link is here - https://docs.splunk.com/Documentation/Splunk/latest/Viz/DashboardPDFs#Limitations_to_PDF_generation.
I have a Splunk Enterprise cluster that doesn't get new data ingested anymore, but the existing indexes should remain searchable for a while still. Since search usage is only sporadic, I'd like to save on infrastructure cost and hibernate the whole cluster, only bringing it up again when someone needs to search the old data, then hibernating it again.

How would I best do this? My environment consists of a search head cluster with 2 members and an indexer cluster with 6 members. My understanding is that as soon as I start to stop indexers, the cluster will try to rebalance the data onto the remaining indexer nodes. That seems suboptimal, since I need to stop all the instances eventually and don't want to end up with a single indexer node holding all the data.

Any ideas?
index="oncall_hp" currentPhase=UNACKED routingKey=*event* entityDisplayName!=*Local-Self-Monitoring* | dedup incidentNumber | timechart count span=1h | fillnull value=0 count | eval Heure = strftime(... See more...
index="oncall_hp" currentPhase=UNACKED routingKey=*event* entityDisplayName!=*Local-Self-Monitoring* | dedup incidentNumber | timechart count span=1h | fillnull value=0 count | eval Heure = strftime(_time, "%H") | stats sum(count) as Events by Heure
Hi @mukhan1, a flat file, to my knowledge, is a text file that is continuously updated. If your flat file is a text file that is continuously updated, Splunk can read it. Ciao. Giuseppe
Hi all, I have a csv file which contains some counts. I have a set of groups like a, b, c, d, which are the values of a multiselect dropdown and are dynamic. The columns in the csv are new_a_Total, new_a_added, new_a_removed, new_b_Total, new_b_added, new_b_removed, new_c_Total, new_c_added, new_c_removed, new_d_Total, new_d_added, new_d_removed. When I select more than one value from the multiselect dropdown, I want to add up the respective Total, added and removed columns of the selected groups and show the result in a timechart. For example, if a and b are selected, it should add new_a_Total and new_b_Total and rename the sum as Total, add new_a_added and new_b_added and rename the sum as added, and so on; all the respective data is summed and shown as a single result. How can I achieve this? Currently I am trying it with foreach. Any suggestions would be really helpful.
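Not from the thread, but a minimal sketch of the foreach idea, assuming the multiselect token is $groups$ and is configured with valuePrefix "new_", valueSuffix "_*" and a space delimiter, so that selecting a and b expands it to new_a_* new_b_* (the token name and csv filename are placeholders):

| inputlookup counts.csv
| foreach $groups$ [ eval <<MATCHSTR>> = coalesce('<<MATCHSTR>>', 0) + coalesce('<<FIELD>>', 0) ]
| table _time Total added removed

Here <<MATCHSTR>> is the part of the field name matched by the wildcard, so new_a_Total and new_b_Total both accumulate into a field called Total, and likewise for added and removed.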
@gcusello just for your understanding: no, this is not a .txt file, this is a flat file.
Hi @mukhan1, I'm confident that you can read this file with Splunk: Splunk can read every kind of text file! Check if the path and filename in the stanza header are correct, and then check if the user you're using to run Splunk can read that file. Ciao. Giuseppe
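For reference, a minimal monitor stanza sketch for inputs.conf (the path, index and sourcetype below are placeholders, not values from the thread):

# inputs.conf: the stanza header must match the file's actual path
[monitor:///var/log/myapp/output.logs]
index = main
sourcetype = myapp:logs
disabled = false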
Hey @gcusello, I already checked the outputs.conf file and it is working fine. I don't think this issue is related to outputs.conf, because if it were, the other log paths would also fail to send logs; however, I am receiving logs from the same host but am unable to ingest the .logs file into Splunk. This file actually has code in it: whenever I open this file and run a command, it gives me some logs for the command I ran. I want to ingest those logs into Splunk. Please remember that once the file is closed, the data is wiped as well; there is no other record of these logs. I think Splunk doesn't support this type of file ingestion.
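Not suggested in the thread, but one pattern sometimes used for transient output like this is a scripted input: Splunk runs the command on a schedule and indexes whatever it prints to stdout, so the short-lived file never has to be monitored at all. A minimal sketch, assuming the command can be run non-interactively from a wrapper script (the script path, interval, index and sourcetype are placeholders):

# inputs.conf: Splunk indexes the script's stdout on each run
[script://$SPLUNK_HOME/etc/apps/myapp/bin/collect_logs.sh]
interval = 300
index = main
sourcetype = myapp:command_output
disabled = false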
@meshorer you just need to update the "container_type" key on the container to "default".  REST is usually the best mechanism for this but the `phantom.update()` call might also work.  API: https://docs.splunk.com/Documentation/SOARonprem/6.1.1/PlaybookAPI/ContainerAPI#update  REST:  https://docs.splunk.com/Documentation/SOARonprem/6.1.1/PlatformAPI/RESTContainers 
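A minimal sketch of both routes (the container id 123, hostname, and credentials are placeholders; verify the exact payload and call signature against the linked docs):

# REST: POST an update to the container endpoint
curl -k -u admin:password -X POST https://soar.example.com/rest/container/123 -d '{"container_type": "default"}'

# Playbook API: update the current container from within a playbook
phantom.update(container, {"container_type": "default"})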
Hi everyone, did you find the answer to this? I am also looking for the same thing: convert the timestamp 2023-10-20T05:30:00+05:30 to a date. @AdsicSplunk @ECovell @ppablo @splunkdate @Anonymous @Anonymous
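Not an answer from the thread, but a minimal SPL sketch for this kind of conversion, assuming the timestamp string sits in a field called ts (%:z is the format variable for a +05:30 style offset):

| eval date = strftime(strptime(ts, "%Y-%m-%dT%H:%M:%S%:z"), "%Y-%m-%d")

strptime parses the string into epoch time and strftime renders just the date part.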
Hello @ITWhisperer, thank you for your help. Indeed, it displays empty columns. However, I notice the time is added to the graph, but for each day present in the graph. I can remove the per-day display, but it only works well over a single day, because over several days we group by time slot and it does not add up the values of the same time slot. I tried adding everything up, but it doesn't respond at all after that. Do you know how to simply add everything up for each time slot? Best regards, Rajaion
Dear All, please suggest how to create a separate incident review dashboard for each team, or how notables can be separated by team. I.e.: Windows Team can only check Windows-related notables; Unix Team can only check Unix-related notables; SOC Team can check all notables.
The issue is not the chart, it is with your stats table. Because no data exists for those hours, there are no rows (and therefore no corresponding columns). Using timechart instead of stats will fill in the missing time slots; you then just need to convert the null counts to zeroes. Try something like this:

index="oncall_hp" currentPhase=UNACKED routingKey=*event* entityDisplayName!=*Local-Self-Monitoring*
| dedup incidentNumber
| timechart count span=1h
| fillnull value=0 count
| eval Heure = strftime(_time, "%H")
| rename count AS Events
| sort 0 Heure
Hi @maede_yavari, good for you, see you next time! Let me know if I can help you more, or please accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated
Hello community, I'm encountering a problem that's probably simple to fix, but no matter how hard I try, I can't do it. I have a query that returns several results that I count by time range. This allows me to provide a graph showing the hourly load. However, I noticed that when there is no result in a time range (for example between 3:00 a.m. and 4:00 a.m.), the graph does not appear in full; I am missing the time range in question. Here is my current query:

index="oncall_hp" currentPhase=UNACKED routingKey=*event* entityDisplayName!=*Local-Self-Monitoring*
| dedup incidentNumber
| eval Heure = strftime(_time, "%H")
| stats count by Heure
| rename count AS Events
| sort Heure

I tried to force a "0" value to appear if there was nothing, but that didn't change anything:

index="oncall_hp" currentPhase=UNACKED routingKey=*event* entityDisplayName!=*Local-Self-Monitoring*
| dedup incidentNumber
| eval Heure = strftime(_time, "%H")
| stats count by Heure
| rename count AS Events
| eval Events=if(isnull(Events) OR len(Events)==0, "0", Events)
| sort Heure

I looked on the forum to see if other people have had this problem, but I couldn't find it (or I didn't look hard enough). Do you have an idea for simply adding a "0" value when a time slot is empty, so that it is added to the graph? Best regards, Rajaion
It seems to be. I also get the message: Datamodel 'TEST' had an invalid search, cannot get indexes to search