In order to be able to debug your code, it would be useful to see your actual code, or at least a cut-down version of it that demonstrates the problem. Also, does it occur with large dashboards, or only small ones? Does it occur with fresh browser instances or old ones? Does it occur with different browsers or just one? Which browser(s) does it occur with? Any other information like this might give a clue as to what is happening.
Hi @quadrant8 , 10K events is the limit on subsearch results. If you run the subsearch as a main search, without anything else, do you get more or fewer than 10K events? If more than 10K, you have to find a different solution, e.g. putting the subsearch into the main search with an OR condition, defining a correlation key, and checking that the correlation key is present in both searches. Ciao. Giuseppe
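A minimal sketch of that pattern, assuming two hypothetical indexes that share a user field as the correlation key (all index and field names here are placeholders):

(index=index_a) OR (index=index_b)
| stats dc(index) AS index_count values(action) AS actions by user
| where index_count=2

The OR merges both data sets in a single search (no subsearch, so no 10K cap), and the where clause keeps only the correlation keys that appear in both indexes.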
The answer is in this Splunk blog post. Somewhere in "System Configuration" you can configure the integration with ES. One nuance: I opened this settings menu once, but the second time I couldn't find it.
Hi @harsmarvania57 , I really appreciate what you did with the script! In your script, you are using the Python package "splunk":

from splunk import mergeHostPath
import splunk.rest as rest
import splunk.auth as auth
import splunk.entity as entity

But I can't find the package with pip install. Could you give the correct name of the package?
Hi @tatdat171 The script I created is not out of date. It still works for on-prem and Splunk Cloud. I would like to know which functions didn't work for you. Thanks, Harshil
Hi Team, We are currently using Classic XML and have made the panels collapsible/expandable using HTML/CSS, following the suggestion in the thread below: https://community.splunk.com/t5/Dashboards-Visualizations/How-to-add-feature-expand-or-collapse-panel-in-dashboard-using/m-p/506986 However, sometimes on the first dashboard load, both the "+" and "-" signs are visible. This happens only occasionally, so I have not been able to find the cause. Do you have any suggestions or ideas to fix this? Thank you!
Better late than never: Sample data would be helpful here. The request is a bit confusing, since you seem to want the top 5 URLs per status code, but your URL count stops at 10. With 3 status codes, the top 5 could go to 15, right? For the second point, which UserID would that be? Presumably each URL could be hit by multiple users, and the top 5 codes for each URL would differ per user.
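If the first reading is right (top 5 URLs within each status code), a sketch like this might be a starting point, assuming fields named status and url (both assumptions):

index=web_logs
| stats count by status, url
| sort 0 status, -count
| streamstats count AS rank by status
| where rank<=5

sort 0 removes the default sort row limit; streamstats then numbers the URLs within each status code so the where clause can keep the top 5 of each.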
First load the lookups, then group both realms using stats. Try something like this and adjust it to your needs, assuming there is a field that is common to both data sets:

| inputlookup lookup1
| inputlookup lookup2 append=true
| stats values(fieldA) AS fieldA (...) by fieldB_common_in_both_datasets

If there is no common field, use rename or eval to create that common field before the stats:

| inputlookup lookup1
| inputlookup lookup2 append=true
| rename fieldC as fieldB
| stats values(fieldA) AS fieldA (...) by fieldB
Better late than never: This needs more information on what you consider month and week boundaries. Does "January, Week 1" mean the first 7 days of January; January 1st to the last day of that week (e.g. Saturday); or the Sunday before January 1st to the following Saturday? When you say by week and month together, do you just want a label for the month in front of the 52 weeks?
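If the answer is just a month label in front of a week number, something along these lines could work, using ISO week numbers as one (assumed) choice of boundary; the index name is a placeholder:

index=your_data
| eval month=strftime(_time, "%B"), week="Week ".strftime(_time, "%V")
| stats count by month, week

%V gives the ISO week of the year; if "Week 1" should instead mean the first 7 days of the month, the eval would need to compute ceil(day_of_month/7) from %d instead.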
I've seen the documentation which says "by default subsearches return a maximum of 10,000 results and have a maximum runtime of 60 seconds", but it's unclear whether that limit applies before or after transforms. For example, does it apply to the base search (i.e. the output of index=wineventlogs AND ComputerName=MyDesktop is capped at 10K), or is it the filtered result set (i.e. after I add conditions and filters to reduce the final dataset) where any results over 10K are dropped?
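One way to see which side of the cap you are on is to run the inner search on its own, with and without the extra filters, and compare the counts, e.g. (reusing the example search from the question):

index=wineventlogs ComputerName=MyDesktop
| stats count

If the filtered count is under 10K while the unfiltered one is far over it, the behavior of the limit becomes easy to observe directly on your own data.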
Usually you can replace join with stats, and performance will be a lot better. Try something like this and adjust it to your needs:

index=INDEXA OR index=INDEXB
| stats values(fieldB) AS fieldB values(fieldC) AS fieldC values(fieldX) AS fieldX values(fieldY) AS fieldY values(fieldZ) AS fieldZ by fieldA
| fillnull value=unknown fieldZ
| stats count(fieldB) AS fieldB count(fieldC) AS fieldC count(fieldX) AS fieldX count(fieldY) AS fieldY by fieldA, fieldZ

First use OR to merge the info from both indexes and use stats to group the other fields by fieldA. Then, since there will be gaps of information in some fields, you can use fillnull to fill those gaps. Finally, count all fields by fieldA and fieldZ. Also check this post: https://community.splunk.com/t5/Splunk-Search/Replace-join-with-stats-to-merge-events-based-on-common-field/m-p/321060
Better late than never: Assuming your Windows host monitoring polls the hosts at regular intervals and logs a success or failure, and you want a simple line chart with value 1 for up and 0 for down over some interval (say 10 minutes), you could do this:

sourcetype=WinHostMon
| eval status_num=if(Status="up",1,0)
| timechart span=10m min(status_num) by Host
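If a host can stop reporting entirely (no events at all in an interval), the gaps can optionally be forced to 0 as well, e.g.:

sourcetype=WinHostMon
| eval status_num=if(Status="up",1,0)
| timechart span=10m min(status_num) by Host
| fillnull value=0

This treats "no data" as "down", which may or may not be the right call for your monitoring.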
@gcusello : Thanks for your response. Story in short: I want to map certificate details from one of the sources to fields in the Certificates data model: https://docs.splunk.com/Documentation/CIM/5.3.2/User/Certificates. This is my requirement. I have mapped two fields using FIELDALIAS - ssl_issuer and ssl_end_time. Now I want to map TagData.Email to ssl_issuer_email. I am using these fields further. Regards, PNV
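For comparison, a search-time sketch that pulls the Email entry out of the Key/Value array; the index, sourcetype, and exact JSON path are assumptions based on the sample event posted elsewhere in this thread:

index=your_index sourcetype="test:sourcetype"
| spath path=TagData{} output=tags
| eval email_tag=mvindex(mvfilter(match(tags, "Email")), 0)
| eval ssl_issuer_email=spath(email_tag, "Value")

spath first turns the TagData array into a multivalue field of JSON objects; mvfilter keeps the object whose Key is Email, and the eval spath function then extracts its Value.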
Let's say I have 2 lookup files: lookup1 has 50 values and the other has 150 values. When I inner join lookup1 to lookup2, it gives me fewer results, but when I reverse the order, the results change and are higher.
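A direction-independent way to see the overlap is to skip join and combine the lookups with stats, e.g. (assuming the shared key field is called key, which is a placeholder):

| inputlookup lookup1
| eval src="lookup1"
| inputlookup lookup2 append=true
| eval src=coalesce(src, "lookup2")
| stats dc(src) AS sources values(*) AS * by key
| where sources=2

Because stats has no notion of a left or right side, the result is the same whichever lookup is loaded first; join, by contrast, is subject to subsearch row and match limits, which is one likely reason the two directions give different counts.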
Below is my scenario, as described by an Oracle DBA. I have two indexes:

INDEXA: fieldA fieldB fieldC
INDEXB: fieldA fieldX fieldY fieldZ

First I need to join them both; it will be a kind of LEFT JOIN, as you probably noticed, by fieldA. Then group by fieldA + fieldZ and count each group. In DBA language, something like:

select a.fieldA, b.fieldZ, count(*)
from indexA a
left join indexB b on a.fieldA = b.fieldA
group by a.fieldA, b.fieldZ

Any hints? K.
Hi @Poojitha , the first question is: why? Creating fields at index time puts additional load on the indexers during indexing; this is feasible only if you don't have a big volume of data. Anyway, you have to create the fields at index time as described at https://docs.splunk.com/Documentation/Splunk/9.2.1/Data/Configureindex-timefieldextraction using an ingest-time eval action, as described at https://docs.splunk.com/Documentation/Splunk/9.2.1/Data/IngestEval

In props.conf:

[your_sourcetype]
TRANSFORMS-eval1 = eval1

In transforms.conf:

[eval1]
INGEST_EVAL = field3=json_extract(email, "TagData{}.Email")

(please check the path of your JSON field)

In fields.conf:

[field3]
INDEXED=true

Ciao. Giuseppe
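Once the field is indexed, it can be checked with tstats, e.g. (the index name is a placeholder):

| tstats count where index=your_index by field3

If field3 shows up with values here, the extraction is working; tstats only sees indexed fields, so this doubles as a verification that the field really was created at index time.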
Hi All,

TagData: [
  {
    Key: Application
    Value: Test_App
  }
  {
    Key: Email
    Value: test@abc.com
  }
]

I have nested JSON data as above. I want to extract the Email field value and map it to a new field, owner_email. This needs to be done at index time. With a normal Splunk search, I can get it this way:

index=*_test sourcetype="test:sourcetype" source="*:test"
| array2object path="TagData" key="Key" value="Value"
| rename "TagData.Email" as owner_email

Please help me understand how to achieve this at index time. How do I update the props.conf file? Regards, PNV