All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I disabled selinux, fapolicyd, and firewalld, but it still happens. That said, I think we may have narrowed it down to an in-house script that runs nightly! Thanks for the help!
As the title suggests, I got some SSL certs from my teams, but because the default SSL port is 8443, it's not recognizing the certificates.  I'm kind of a noob to certificates, though, so I hope I'm explaining it right. 
I still have one last small issue: when values are missing, the SPL adds true to the last flags instead of the correct ones. Example: -config Conf -console -ntpsync no --check_core no should give:

Flag : config Value: Conf
Flag : console Value: true
Flag : ntpsync Value: no
Flag : check_core Value: no

but it extracts all the values first and then adds true to the last flags, so it gives this, which is incorrect:

Flag : config Value: Conf
Flag : console Value: no
Flag : ntpsync Value: no
Flag : check_core Value: true

So if you can show me a way to handle the flags without values so they are the ones matched with true, it would be really helpful.
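One way to avoid the misalignment (a sketch, assuming flags start with one or two dashes and values never start with a dash): extract each flag together with its optional value as a single token first, expand, and only then split flag from value, defaulting the missing value to true:

```
| makeresults
| eval _raw="-config Conf -console -ntpsync no --check_core no"
| rex max_match=0 field=_raw "(?<pair>-+\w+(\s+(?!-)\S+)?)"
| mvexpand pair
| rex field=pair "-+(?<flag>\w+)(\s+(?<value>\S+))?"
| eval value=coalesce(value, "true")
| table flag value
```

Because each flag/value pair becomes its own row before the value is extracted, a missing value can never shift onto a later flag.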
In my indexes.conf there isn't a stanza specified for the source type. The lines I have are:

[example]
coldPath
coldToFrozenDir
enableDataIntergrity
homePath
thawedpath

The extraction in props.conf that works is:

[source::C:\\examplelogs*]
EXTRACT-logs = Hostname>(?<hostname>(.*?))</Hostname
Hello, everyone! In case you missed it, there is a Smart Agent article in the Knowledge Base: Just one Smart Agent: unlimited agent management control. The included 9-minute video gives an overview of Smart Agent and its real-world benefits, followed by a demonstration of its capabilities right in the UI. Along with the video, the article lists the key points discussed, accompanied by timestamps so you can find them in the video. Do let me know if there's anything I can do to make this content more usable for you. Thanks! Claudia
It works when I use the query below:

....| spath path=lint-info.-Wunused-but-set-variable{} output=members | stats count by members InstanceName

But I don't know the values of Type in advance. If there is more than one type, the query should automatically break them into individual events.
I have a JSON which I need help breaking into key/value pairs.

"lint-info": {
  "-Wunused-but-set-variable": [
    {
      "location": { "column": 58, "filename": "ab1", "line": 237 },
      "source": "logic [MSGG_RX_CNT-1:0][MSGG_RX_CNT_MAXWIDTH+2:0] msgg_max_unrsrvd_temp; // temp value including carry out",
      "warning": "variable 'msgg_max_unrsrvd_temp' is assigned but its value is never used"
    },
    {
      "location": { "column": 58, "filename": "ab2", "line": 254 },
      "source": "logic msgg_avail_cnt_err; // Available Counter update error detected",
      "warning": "variable 'msgg_avail_cnt_err' is assigned but its value is never used"
    }
  ],
  "-Wunused-genvar": [
    {
      "location": { "column": 11, "filename": "ab3", "line": 328 },
      "source": "genvar nn,oo;",
      "warning": "unused genvar 'oo'"
    }
  ],
  "total": 3,
  "types": [ "-Wunused-but-set-variable", "-Wunused-genvar" ]
},

I need to get a table with Type, filename, and line values like below:

Type                          Filename   Line
-Wunused-but-set-variable     ab1        237
-Wunused-but-set-variable     ab2        254
-Wunused-genvar               ab3        328

Thanks
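One possible approach (a sketch, assuming Splunk 8.1+ for the json_* eval functions and that the JSON object is in _raw): drive the expansion from the "types" array itself, so new warning types are picked up automatically without hard-coding them:

```
| spath input=_raw path=lint-info output=lint
| eval Type=json_array_to_mv(json_extract(lint, "types"))
| mvexpand Type
| eval entries=json_array_to_mv(json_extract(lint, Type))
| mvexpand entries
| eval Filename=json_extract(entries, "location.filename"),
       Line=json_extract(entries, "location.line")
| table Type Filename Line
```

Each value of Type is used as the extraction path into the lint object, so each warning entry becomes its own row with its filename and line.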
The where command does not support the IN operator.  It does support the in function, which has a different syntax. The point of my original reply was that extra code to force a set of values into a comma-separated list for the benefit of the IN operator is wasted effort.  The interpreter is just going to convert that comma-separated list into a series of OR operators, so you might as well just take the raw result from the subsearch (without using IN).
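For reference, the two syntaxes look like this (hypothetical index and field names). The IN operator belongs to the search command:

```
index=web status IN (200, 404, 500)
```

while where needs the in() eval function:

```
index=web | where in(status, "200", "404", "500")
```

Note the eval function takes the field as its first argument and quoted literals after it.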
Noted on that, but this throws me an Error in 'where' command: The expression is malformed. Expected ).

index="main" label=x source="C:\\Users\\me\\Documents\\test22.csv"
| eval hm = replace(hostname,",","")
| where hm IN ([search index=main label=y userid=tom
    | fields associateddev
    | eval list_value = replace(associateddev,"{'","")
    | eval list_value = replace(list_value,"'}","")
    | eval list_value = split(list_value,"', '")
    | mvexpand list_value
    | stats values(list_value) as search])

But this works, assuming I don't do any operations on the hostname column. Is it possible to insert some eval on hostname before doing the IN operation?

index="main" label=x source="C:\\Users\\me\\Documents\\test22.csv" hostname IN ([search index=main label=y userid=tom
    | fields associateddev
    | eval list_value = replace(associateddev,"{'","")
    | eval list_value = replace(list_value,"'}","")
    | eval list_value = split(list_value,"', '")
    | mvexpand list_value
    | stats values(list_value) as search])
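One way around this (a sketch): since where does not accept the IN operator, do the eval first and then filter with a second search command, letting the subsearch emit the OR expression via format against the renamed field:

```
index="main" label=x source="C:\\Users\\me\\Documents\\test22.csv"
| eval hm = replace(hostname, ",", "")
| search
    [ search index=main label=y userid=tom
      | fields associateddev
      | eval hm = replace(associateddev, "{'", "")
      | eval hm = replace(hm, "'}", "")
      | eval hm = split(hm, "', '")
      | mvexpand hm
      | dedup hm
      | fields hm
      | format ]
```

The subsearch returns something like ( ( hm="dev1" ) OR ( hm="dev2" ) ), which the outer search command then applies to the evaluated hm field.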
Hi @Ajit.kunjir, Thanks for asking your question on the Community. Please check out this AppD Docs page to see if it helps. https://docs.appdynamics.com/appd/22.x/latest/en/database-visibility/add-database-collectors/configure-oracle-collectors
Use the transpose command.
Hi @Jahnavi.Vangari, Thanks for asking your question on the Community. I found this Docs page that might be helpful https://docs.appdynamics.com/appd/22.x/latest/en/appdynamics-essentials/alert-and-respond/actions/predefined-templating-variables Be sure to check out the left side navigation for similar articles that may help.
Running the search below gives me a horizontal list of the fields and values where I scroll left to right. How do you change the results to list the fields and values vertically, where I scroll down? | rest /services/data/indexes splunk_server=* | where title = "main"
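A sketch using the transpose command, which turns the single wide result row into a two-column field/value table you can scroll through vertically:

```
| rest /services/data/indexes splunk_server=*
| where title = "main"
| transpose
```

Each field of the result becomes its own row in the output.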
The IN operator maps to a series of OR operators (check the Job Inspector) so forcing a set of OR operators into IN-compatible form is a wasted effort.
I have a Splunk query (first snippet below) that returns me:

( ( ( list_value2="dev1" OR list_value2="dev2" OR list_value2="dev5" OR list_value2="dev6" ) ) )

I want to use these 4 values as a list to query using the IN operation from another main search, as shown in the second code snippet.

```
index=main label=y userid=tom
| fields associateddev
| eval list_value = replace(associateddev,"{'","")
| eval list_value = replace(list_value,"'}","")
| eval list_value = split(list_value,"', '")
| mvexpand list_value
| stats values(list_value) as list_value2
| format
```

I want to use the results from this as part of a subsearch to query another source, as shown below. Ideally, the subsearch would return a list that I can just call using | where hname IN list_value2. But list_value2 is returning me that weird ( ( ( list_value2="dev1" OR list_value2="dev2" OR list_value2="dev5" OR list_value2="dev6" ) ) ) string.

```
index="main" label=x
| where hname IN
    [search index=main label=y userid=tom
     | fields associateddev
     | eval list_value = replace(associateddev,"{'","")
     | eval list_value = replace(list_value,"'}","")
     | eval list_value = split(list_value,"', '")
     | mvexpand list_value
     | stats values(list_value) as list_value2]
| table _time, hname, list_value2
```

I have tried

```
| stats values(list_value) as search | format mvsep="," "" "" "" "" "" ""]
```

but I still get the error: Error in 'search' command: Unable to parse the search: Right hand side of IN must be a collection of literals. '(dev1 dev2 dev5 dev6)' is not a literal.
Hi @kevhead, sorry, it was a typo: in that installation I had a lookup containing some additional information that you can delete from the dashboard. Ciao. Giuseppe
For me, it is going to be an ongoing thing and not a one-time effort, so I'm wondering if there is a way to achieve this.
Nice, I tried this and it looks like it is working. Question: does this mean only part of my log file will be ingested, so I am not using the whole log's disk space against my license? Actually, I only want to ingest part of my debug logs (which are huge). Also, can we line-break the events after this conversion so we have distinct events again after ingestion? @darrenfuller @woodcock 
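If the conversion is done with SEDCMD in props.conf, then yes: text removed before indexing does not count toward license volume, since licensing is measured on the raw data actually indexed. A sketch with a hypothetical sourcetype and regexes (adjust both to your data):

```
# props.conf
[my_debug_sourcetype]
# Strip the bulky debug payload before indexing; only the text that
# survives this substitution counts against the license
SEDCMD-trim_debug = s/DEBUG_PAYLOAD=.*$//g
# Re-break the remaining stream into one event per timestamped line
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2})
```

LINE_BREAKER applies after the SED-style edits, so events can be re-split even when the original line structure changed.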
@gcusello Thank you for providing this code for the dashboard. I've implemented it and it's working quite well, except for the hardware portion, which returns an "Error in 'lookup' command: Could not construct lookup 'Server, host, OUTPUT, IP, Tipologia'. See search.log for more details". Any assistance with this would be great, thank you!
Hello to everyone! I have a curious situation: I have log files that I am collecting via the Splunk UF. These log files do not contain a whole timestamp: one part of the timestamp is contained in the file name, and the other part is placed directly in the event. As I found in other answers, I have options:
1. INGEST_EVAL on the indexer layer: I did not understand how I could take one part from the source and glue it to the _raw data. Link to the answer
2. Use a handmade script to create a valid timestamp for the events: this is more understandable for me, but it looks like reinventing the wheel.
So the question is: can I use the first option, if it is possible?

This is an example of the source (* is some number):
E:\logs\rmngr_*\24020514.log
24 - year, 02 - month, 05 - day, 14 - hour

And this is an example of the event:
45:50.152011-0,CONN,3,process=rmngr,p:processName=RegMngrCntxt,p:processName=ServerJobExecutorContext,OSThread=15348,t:clientID=64658,t:applicationName=ManagerProcess,t:computerName=hostname01,Txt=Clnt: DstUserName1: user@domain.com StartProtocol: 0 Success
45:50.152011 - minute, second, and subsecond
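For what it's worth, option 1 could look roughly like this (an untested sketch; it assumes the 8 digits before .log are yymmddhh, that each event starts with mm:ss.subseconds, and the stanza names are made up):

```
# props.conf
[rmngr_log]
# Don't try to parse a timestamp from the event text itself
DATETIME_CONFIG = CURRENT
TRANSFORMS-settime = rmngr_time_from_source

# transforms.conf
[rmngr_time_from_source]
# Glue yymmddhh from the source path to mm:ss.ffffff from _raw,
# then parse the combined string into _time
INGEST_EVAL = _time := strptime(replace(source, ".*(\d{8})\.log$", "\1") . " " . replace(_raw, "^(\d{2}:\d{2}\.\d{6}).*", "\1"), "%y%m%d%H %M:%S.%6N")
```

Note that INGEST_EVAL runs at the parsing layer, so this configuration belongs on the indexers (or a heavy forwarder), not on the UF that reads the files.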